From 1cb50f6785d4c648df92d9ba1a22ab09f0183b19 Mon Sep 17 00:00:00 2001
From: Tiago Loureiro
Date: Mon, 4 Mar 2019 14:31:20 +0100
Subject: [PATCH 01/23] Updated emails

---
 services/brig/deb/opt/brig/template-version | 2 +-
 .../deb/opt/brig/templates/de/provider/email/activation.html | 2 +-
 .../opt/brig/templates/de/provider/email/approval-confirm.html | 2 +-
 .../opt/brig/templates/de/provider/email/approval-request.html | 2 +-
 .../brig/deb/opt/brig/templates/de/team/email/invitation.html | 2 +-
 .../opt/brig/templates/de/team/email/new-member-welcome.html | 2 +-
 .../brig/deb/opt/brig/templates/de/user/email/activation.html | 2 +-
 .../brig/deb/opt/brig/templates/de/user/email/deletion.html | 2 +-
 .../brig/deb/opt/brig/templates/de/user/email/new-client.html | 2 +-
 .../deb/opt/brig/templates/de/user/email/password-reset.html | 2 +-
 .../deb/opt/brig/templates/de/user/email/team-activation.html | 2 +-
 services/brig/deb/opt/brig/templates/de/user/email/update.html | 2 +-
 .../brig/deb/opt/brig/templates/de/user/email/verification.html | 2 +-
 .../deb/opt/brig/templates/en/provider/email/activation.html | 2 +-
 .../opt/brig/templates/en/provider/email/approval-confirm.html | 2 +-
 .../opt/brig/templates/en/provider/email/approval-request.html | 2 +-
 .../brig/deb/opt/brig/templates/en/team/email/invitation.html | 2 +-
 .../opt/brig/templates/en/team/email/new-member-welcome.html | 2 +-
 .../brig/deb/opt/brig/templates/en/user/email/activation.html | 2 +-
 .../brig/deb/opt/brig/templates/en/user/email/deletion.html | 2 +-
 .../brig/deb/opt/brig/templates/en/user/email/new-client.html | 2 +-
 .../deb/opt/brig/templates/en/user/email/password-reset.html | 2 +-
 .../deb/opt/brig/templates/en/user/email/team-activation.html | 2 +-
 services/brig/deb/opt/brig/templates/en/user/email/update.html | 2 +-
 .../brig/deb/opt/brig/templates/en/user/email/verification.html | 2 +-
 .../brig/deb/opt/brig/templates/et/user/email/activation.html | 2 +-
 .../brig/deb/opt/brig/templates/et/user/email/deletion.html | 2 +-
 .../brig/deb/opt/brig/templates/et/user/email/new-client.html | 2 +-
 .../deb/opt/brig/templates/et/user/email/password-reset.html | 2 +-
 .../deb/opt/brig/templates/et/user/email/team-activation.html | 2 +-
 services/brig/deb/opt/brig/templates/et/user/email/update.html | 2 +-
 .../brig/deb/opt/brig/templates/et/user/email/verification.html | 2 +-
 .../brig/deb/opt/brig/templates/fr/user/email/activation.html | 2 +-
 .../brig/deb/opt/brig/templates/fr/user/email/deletion.html | 2 +-
 .../brig/deb/opt/brig/templates/fr/user/email/new-client.html | 2 +-
 .../deb/opt/brig/templates/fr/user/email/password-reset.html | 2 +-
 .../deb/opt/brig/templates/fr/user/email/team-activation.html | 2 +-
 services/brig/deb/opt/brig/templates/fr/user/email/update.html | 2 +-
 .../brig/deb/opt/brig/templates/fr/user/email/verification.html | 2 +-
 .../brig/deb/opt/brig/templates/lt/user/email/activation.html | 2 +-
 .../brig/deb/opt/brig/templates/lt/user/email/deletion.html | 2 +-
 .../brig/deb/opt/brig/templates/lt/user/email/new-client.html | 2 +-
 .../deb/opt/brig/templates/lt/user/email/password-reset.html | 2 +-
 .../deb/opt/brig/templates/lt/user/email/team-activation.html | 2 +-
 services/brig/deb/opt/brig/templates/lt/user/email/update.html | 2 +-
 .../brig/deb/opt/brig/templates/lt/user/email/verification.html | 2 +-
 .../brig/deb/opt/brig/templates/ru/user/email/activation.html | 2 +-
 .../brig/deb/opt/brig/templates/ru/user/email/deletion.html | 2 +-
 .../brig/deb/opt/brig/templates/ru/user/email/new-client.html | 2 +-
 .../deb/opt/brig/templates/ru/user/email/password-reset.html | 2 +-
 .../deb/opt/brig/templates/ru/user/email/team-activation.html | 2 +-
 services/brig/deb/opt/brig/templates/ru/user/email/update.html | 2 +-
 .../brig/deb/opt/brig/templates/ru/user/email/verification.html | 2 +-
 services/brig/deb/opt/brig/templates/version | 2 +-
 54 files changed, 54 insertions(+), 54 deletions(-)

diff --git a/services/brig/deb/opt/brig/template-version b/services/brig/deb/opt/brig/template-version
index e6d41e92977..a51152c9bcc 100644
--- a/services/brig/deb/opt/brig/template-version
+++ b/services/brig/deb/opt/brig/template-version
@@ -1 +1 @@
-v1.0.55
+v1.0.56
diff --git a/services/brig/deb/opt/brig/templates/de/provider/email/activation.html b/services/brig/deb/opt/brig/templates/de/provider/email/activation.html
index e2d5a5920cb..fc8e6222217 100644
--- a/services/brig/deb/opt/brig/templates/de/provider/email/activation.html
+++ b/services/brig/deb/opt/brig/templates/de/provider/email/activation.html
@@ -1 +1 @@
-Dein ${brand_service}-Benutzerkonto

${brand_label_url}

Bestätige deine E-Mail-Adresse

Deine E-Mail-Adresse ${email} wurde benutzt, um dich als ${brand_service} Serviceanbieter zu registrieren.

Um die Registrierung abzuschließen, bestätige bitte deine E-Mail-Adresse, indem du auf den unteren Button klickst.

Bitte beachte, dass dein ${brand_service}-Konto nach der Bestätigung der E-Mail-Adresse noch durch uns freigeschaltet werden muss. Dies erfolgt üblicherweise innerhalb von 24 Stunden. Über die Freischaltung wirst du per E-Mail informiert.

 
Überprüfe
 

Falls du nicht auf den Button klicken kannst, kopiere den Link und füge ihn in deinen Browser ein:

${url}

Solltest du dich nicht für ein ${brand} Service-Provider-Konto mit dieser E-Mail-Adresse registriert haben, kannst du diese Meldung ignorieren. Falls du einen Missbrauch deiner E-Mail-Adresse melden möchtest, kontaktiere uns bitte.

Bitte antworte nicht auf diese E-Mail.

                                                           
\ No newline at end of file
+Dein ${brand_service}-Benutzerkonto

${brand_label_url}

Bestätige deine E-Mail-Adresse

Deine E-Mail-Adresse ${email} wurde benutzt, um dich als ${brand_service} Serviceanbieter zu registrieren.

Um die Registrierung abzuschließen, bestätige bitte deine E-Mail-Adresse, indem du auf den unteren Button klickst.

Bitte beachte, dass dein ${brand_service}-Konto nach der Bestätigung der E-Mail-Adresse noch durch uns freigeschaltet werden muss. Dies erfolgt üblicherweise innerhalb von 24 Stunden. Über die Freischaltung wirst du per E-Mail informiert.

 
Überprüfe
 

Falls du nicht auf den Button klicken kannst, kopiere den Link und füge ihn in deinen Browser ein:

${url}

Solltest du dich nicht für ein ${brand} Service-Provider-Konto mit dieser E-Mail-Adresse registriert haben, kannst du diese Meldung ignorieren. Falls du einen Missbrauch deiner E-Mail-Adresse melden möchtest, kontaktiere uns bitte.

Bitte antworte nicht auf diese E-Mail.

                                                           
\ No newline at end of file
diff --git a/services/brig/deb/opt/brig/templates/de/provider/email/approval-confirm.html b/services/brig/deb/opt/brig/templates/de/provider/email/approval-confirm.html
index 844327c5a6d..53301579858 100644
--- a/services/brig/deb/opt/brig/templates/de/provider/email/approval-confirm.html
+++ b/services/brig/deb/opt/brig/templates/de/provider/email/approval-confirm.html
@@ -1 +1 @@
-Dein ${brand_service}-Benutzerkonto

${brand_label_url}

Hallo,

Wir freuen uns, dir mitzuteilen, dass du jetzt ein genehmigter ${brand_service} bist.

Bitte antworte nicht auf diese E-Mail.

Wenn du dich nicht für ein ${brand_service}-Konto mit dieser E-Mail-Adresse registriert hast, bitte kontaktiere uns.

                                                           
\ No newline at end of file
+Dein ${brand_service}-Benutzerkonto

${brand_label_url}

Hallo,

Wir freuen uns, dir mitzuteilen, dass du jetzt ein genehmigter ${brand_service} bist.

Bitte antworte nicht auf diese E-Mail.

Wenn du dich nicht für ein ${brand_service}-Konto mit dieser E-Mail-Adresse registriert hast, bitte kontaktiere uns.

                                                           
\ No newline at end of file
diff --git a/services/brig/deb/opt/brig/templates/de/provider/email/approval-request.html b/services/brig/deb/opt/brig/templates/de/provider/email/approval-request.html
index 862723a5e71..685afc0cbd8 100644
--- a/services/brig/deb/opt/brig/templates/de/provider/email/approval-request.html
+++ b/services/brig/deb/opt/brig/templates/de/provider/email/approval-request.html
@@ -1 +1 @@
-Anfrage genehmigen: ${brand_service}

${brand_label_url}

Anfrage genehmigen

Ein neuer ${brand_service} hat sich registriert und erwartet deine Genehmigung. Bitte überprüfe die folgenden Informationen.

Name: ${name}

E-Mail: ${email}

Website: ${url}

Beschreibung: ${description}

Wenn die Anfrage echt scheint, kannst du den Service-Provider genehmigen, indem du auf den unteren Button klickst. Sobald genehmigt, kann der Service-Provider sich anmelden und anfangen, Dienste zu registrieren, die ${brand}-Nutzer zu ihren Unterhaltungen hinzufügen können.

Falls die Anfrage zweifelhaft scheint, kontaktiere bitte den Service-Provider zur Klarstellung, bevor du fortfährst.

 
Genehmigen
 

Falls du nicht auf den Button klicken kannst, kopiere den Link und füge ihn in deinen Browser ein:

${url}

Bitte antworte nicht auf diese E-Mail.

                                                           
\ No newline at end of file
+Anfrage genehmigen: ${brand_service}

${brand_label_url}

Anfrage genehmigen

Ein neuer ${brand_service} hat sich registriert und erwartet deine Genehmigung. Bitte überprüfe die folgenden Informationen.

Name: ${name}

E-Mail: ${email}

Website: ${url}

Beschreibung: ${description}

Wenn die Anfrage echt scheint, kannst du den Service-Provider genehmigen, indem du auf den unteren Button klickst. Sobald genehmigt, kann der Service-Provider sich anmelden und anfangen, Dienste zu registrieren, die ${brand}-Nutzer zu ihren Unterhaltungen hinzufügen können.

Falls die Anfrage zweifelhaft scheint, kontaktiere bitte den Service-Provider zur Klarstellung, bevor du fortfährst.

 
Genehmigen
 

Falls du nicht auf den Button klicken kannst, kopiere den Link und füge ihn in deinen Browser ein:

${url}

Bitte antworte nicht auf diese E-Mail.

                                                           
\ No newline at end of file
diff --git a/services/brig/deb/opt/brig/templates/de/team/email/invitation.html b/services/brig/deb/opt/brig/templates/de/team/email/invitation.html
index 66cb8bb1a3d..f19ae5f1928 100644
--- a/services/brig/deb/opt/brig/templates/de/team/email/invitation.html
+++ b/services/brig/deb/opt/brig/templates/de/team/email/invitation.html
@@ -1 +1 @@
-Du wurdest eingeladen, einem Team auf ${brand} beizutreten

${brand_label_url}

Team-Einladung

${inviter} hat dich auf ${brand} zu einem Team eingeladen. Klicke auf den nachstehenden Button, um die Einladung zu akzeptieren.

 
Einladung akzeptieren
 

Falls du nicht auf den Button klicken kannst, kopiere den Link und füge ihn in deinen Browser ein:

${url}

Wenn du Fragen hast, dann kontaktiere uns bitte.

                                                           
\ No newline at end of file
+Du wurdest eingeladen, einem Team auf ${brand} beizutreten

${brand_label_url}

Team-Einladung

${inviter} hat dich auf ${brand} zu einem Team eingeladen. Klicke auf den nachstehenden Button, um die Einladung zu akzeptieren.

 
Einladung akzeptieren
 

Falls du nicht auf den Button klicken kannst, kopiere den Link und füge ihn in deinen Browser ein:

${url}

Wenn du Fragen hast, dann kontaktiere uns bitte.

                                                           
\ No newline at end of file
diff --git a/services/brig/deb/opt/brig/templates/de/team/email/new-member-welcome.html b/services/brig/deb/opt/brig/templates/de/team/email/new-member-welcome.html
index ff71b078a39..5d776fd4fd7 100644
--- a/services/brig/deb/opt/brig/templates/de/team/email/new-member-welcome.html
+++ b/services/brig/deb/opt/brig/templates/de/team/email/new-member-welcome.html
@@ -1 +1 @@
-Du bist einem Team auf ${brand} beigetreten

${brand_label_url}

Willkommen bei ${team_name}.

Du bist soeben mit ${email} einem Team namens ${team_name} auf ${brand} beigetreten.

 

${brand} vereint sichere Verschlüsselung mit reichhaltigem Funktionsumfang und einfacher Bedienung in einer einzigen App. Unterstützt alle gängigen Plattformen.

 
${brand} herunterladen
 

Falls du nicht auf den Button klicken kannst, kopiere den Link und füge ihn in deinen Browser ein:

${url}

Wenn du Fragen hast, dann kontaktiere uns bitte.

Team-ID: ${team_id}

                                                           
\ No newline at end of file
+Du bist einem Team auf ${brand} beigetreten

${brand_label_url}

Willkommen bei ${team_name}.

Du bist soeben mit ${email} einem Team namens ${team_name} auf ${brand} beigetreten.

 

${brand} vereint sichere Verschlüsselung mit reichhaltigem Funktionsumfang und einfacher Bedienung in einer einzigen App. Unterstützt alle gängigen Plattformen.

 
${brand} herunterladen
 

Falls du nicht auf den Button klicken kannst, kopiere den Link und füge ihn in deinen Browser ein:

${url}

Wenn du Fragen hast, dann kontaktiere uns bitte.

Team-ID: ${team_id}

                                                           
\ No newline at end of file
diff --git a/services/brig/deb/opt/brig/templates/de/user/email/activation.html b/services/brig/deb/opt/brig/templates/de/user/email/activation.html
index 2d59aecade7..c3840eb7f11 100644
--- a/services/brig/deb/opt/brig/templates/de/user/email/activation.html
+++ b/services/brig/deb/opt/brig/templates/de/user/email/activation.html
@@ -1 +1 @@
-Dein ${brand}-Benutzerkonto

${brand_label_url}

Bestätige deine E-Mail-Adresse

${email} wurde verwendet, um ein Konto auf ${brand} zu erstellen.
Klicke auf den folgenden Button, um die E-Mail-Adresse zu bestätigen.

 
Überprüfe
 

Falls du nicht auf den Button klicken kannst, kopiere den Link und füge ihn in deinen Browser ein:

${url}

Wenn du Fragen hast, dann kontaktiere uns bitte.

                                                           
\ No newline at end of file
+Dein ${brand}-Benutzerkonto

${brand_label_url}

Bestätige deine E-Mail-Adresse

${email} wurde verwendet, um ein Konto auf ${brand} zu erstellen.
Klicke auf den folgenden Button, um die E-Mail-Adresse zu bestätigen.

 
Überprüfe
 

Falls du nicht auf den Button klicken kannst, kopiere den Link und füge ihn in deinen Browser ein:

${url}

Wenn du Fragen hast, dann kontaktiere uns bitte.

                                                           
\ No newline at end of file
diff --git a/services/brig/deb/opt/brig/templates/de/user/email/deletion.html b/services/brig/deb/opt/brig/templates/de/user/email/deletion.html
index fcc35ae74a2..cff601a47d3 100644
--- a/services/brig/deb/opt/brig/templates/de/user/email/deletion.html
+++ b/services/brig/deb/opt/brig/templates/de/user/email/deletion.html
@@ -1 +1 @@
-Konto löschen?

${brand_label_url}

Lösche dein Konto

Wir haben eine Anfrage zur Löschung deines ${brand}-Benutzerkontos erhalten. Klicke innerhalb der nächsten 10 Minuten auf den nachstehenden Link, um alle deine Unterhaltungen, Nachrichten und Kontakte zu löschen.

 
Konto löschen
 

Falls du nicht auf den Button klicken kannst, kopiere den Link und füge ihn in deinen Browser ein:

${url}

Falls du dies nicht beantragt hast, setze dein Passwort zurück.

Wenn du Fragen hast, dann kontaktiere uns bitte.

                                                           
\ No newline at end of file
+Konto löschen?

${brand_label_url}

Lösche dein Konto

Wir haben eine Anfrage zur Löschung deines ${brand}-Benutzerkontos erhalten. Klicke innerhalb der nächsten 10 Minuten auf den nachstehenden Link, um alle deine Unterhaltungen, Nachrichten und Kontakte zu löschen.

 
Konto löschen
 

Falls du nicht auf den Button klicken kannst, kopiere den Link und füge ihn in deinen Browser ein:

${url}

Falls du dies nicht beantragt hast, setze dein Passwort zurück.

Wenn du Fragen hast, dann kontaktiere uns bitte.

                                                           
\ No newline at end of file
diff --git a/services/brig/deb/opt/brig/templates/de/user/email/new-client.html b/services/brig/deb/opt/brig/templates/de/user/email/new-client.html
index 16af344a1e0..b25cbb52961 100644
--- a/services/brig/deb/opt/brig/templates/de/user/email/new-client.html
+++ b/services/brig/deb/opt/brig/templates/de/user/email/new-client.html
@@ -1 +1 @@
-Neues Gerät

${brand_label_url}

Neues Gerät

Ein neues Gerät wurde zu deinem ${brand}-Benutzerkonto hinzugefügt:

${date}

${model}

Du hast ${brand} vermutlich auf einem neuen Gerät installiert oder dich auf einem bestehenden Gerät erneut eingeloggt. Falls dies nicht der Fall ist, gehe in deine ${brand} Einstellungen, entferne das Gerät und setze dein Passwort zurück.

Wenn du Fragen hast, dann kontaktiere uns bitte.

                                                           
\ No newline at end of file
+Neues Gerät

${brand_label_url}

Neues Gerät

Ein neues Gerät wurde zu deinem ${brand}-Benutzerkonto hinzugefügt:

${date}

${model}

Du hast ${brand} vermutlich auf einem neuen Gerät installiert oder dich auf einem bestehenden Gerät erneut eingeloggt. Falls dies nicht der Fall ist, gehe in deine ${brand} Einstellungen, entferne das Gerät und setze dein Passwort zurück.

Wenn du Fragen hast, dann kontaktiere uns bitte.

                                                           
\ No newline at end of file
diff --git a/services/brig/deb/opt/brig/templates/de/user/email/password-reset.html b/services/brig/deb/opt/brig/templates/de/user/email/password-reset.html
index 48de102b5da..280fc931845 100644
--- a/services/brig/deb/opt/brig/templates/de/user/email/password-reset.html
+++ b/services/brig/deb/opt/brig/templates/de/user/email/password-reset.html
@@ -1 +1 @@
-Änderung des Passworts auf ${brand}

${brand_label_url}

Passwort zurücksetzen

Wir haben eine Anfrage zum Zurücksetzen des Passworts für dein ${brand}-Benutzerkonto erhalten. Klicke auf den nachstehenden Button, um ein neues Passwort zu erstellen.

 
Passwort zurücksetzen
 

Falls du nicht auf den Button klicken kannst, kopiere den Link und füge ihn in deinen Browser ein:

${url}

Wenn du Fragen hast, dann kontaktiere uns bitte.

                                                           
\ No newline at end of file
+Änderung des Passworts auf ${brand}

${brand_label_url}

Passwort zurücksetzen

Wir haben eine Anfrage zum Zurücksetzen des Passworts für dein ${brand}-Benutzerkonto erhalten. Klicke auf den nachstehenden Button, um ein neues Passwort zu erstellen.

 
Passwort zurücksetzen
 

Falls du nicht auf den Button klicken kannst, kopiere den Link und füge ihn in deinen Browser ein:

${url}

Wenn du Fragen hast, dann kontaktiere uns bitte.

                                                           
\ No newline at end of file
diff --git a/services/brig/deb/opt/brig/templates/de/user/email/team-activation.html b/services/brig/deb/opt/brig/templates/de/user/email/team-activation.html
index 0287f7fa751..40ff675d651 100644
--- a/services/brig/deb/opt/brig/templates/de/user/email/team-activation.html
+++ b/services/brig/deb/opt/brig/templates/de/user/email/team-activation.html
@@ -1 +1 @@
-${brand} Benutzerkonto

${brand_label_url}

Dein neues ${brand}-Benutzerkonto

Ein neues ${brand}-Team wurde mit ${email} erstellt. Bitte verifiziere deine E-Mail-Adresse.

 
Überprüfe
 

Falls du nicht auf den Button klicken kannst, kopiere den Link und füge ihn in deinen Browser ein:

${url}

Wenn du Fragen hast, dann kontaktiere uns bitte.

                                                           
\ No newline at end of file
+${brand} Benutzerkonto

${brand_label_url}

Dein neues ${brand}-Benutzerkonto

Ein neues ${brand}-Team wurde mit ${email} erstellt. Bitte verifiziere deine E-Mail-Adresse.

 
Überprüfe
 

Falls du nicht auf den Button klicken kannst, kopiere den Link und füge ihn in deinen Browser ein:

${url}

Wenn du Fragen hast, dann kontaktiere uns bitte.

                                                           
\ No newline at end of file
diff --git a/services/brig/deb/opt/brig/templates/de/user/email/update.html b/services/brig/deb/opt/brig/templates/de/user/email/update.html
index 1b08f3ccc61..5ef41228598 100644
--- a/services/brig/deb/opt/brig/templates/de/user/email/update.html
+++ b/services/brig/deb/opt/brig/templates/de/user/email/update.html
@@ -1 +1 @@
-Deine neue E-Mail-Adresse auf ${brand}

${brand_label_url}

Bestätige deine E-Mail-Adresse

${email} wurde als deine neue E-Mail-Adresse auf ${brand} registriert. Klicke auf den nachstehenden Button, um deine neue Adresse zu bestätigen.

 
Überprüfe
 

Falls du nicht auf den Button klicken kannst, kopiere den Link und füge ihn in deinen Browser ein:

${url}

Wenn du Fragen hast, dann kontaktiere uns bitte.

                                                           
\ No newline at end of file
+Deine neue E-Mail-Adresse auf ${brand}

${brand_label_url}

Bestätige deine E-Mail-Adresse

${email} wurde als deine neue E-Mail-Adresse auf ${brand} registriert. Klicke auf den nachstehenden Button, um deine neue Adresse zu bestätigen.

 
Überprüfe
 

Falls du nicht auf den Button klicken kannst, kopiere den Link und füge ihn in deinen Browser ein:

${url}

Wenn du Fragen hast, dann kontaktiere uns bitte.

                                                           
\ No newline at end of file
diff --git a/services/brig/deb/opt/brig/templates/de/user/email/verification.html b/services/brig/deb/opt/brig/templates/de/user/email/verification.html
index 016660e86d5..aa2b6b93f33 100644
--- a/services/brig/deb/opt/brig/templates/de/user/email/verification.html
+++ b/services/brig/deb/opt/brig/templates/de/user/email/verification.html
@@ -1 +1 @@
-${brand} Bestätigungs-Code ist ${code}

${brand_label_url}

Bestätige deine E-Mail-Adresse

${email} wurde verwendet, um ein Benutzerkonto auf ${brand} zu erstellen. Gib diesen Code ein, um deine E-Mail-Adresse zu verifizieren und dein Konto zu erstellen.

 

${code}

 

Wenn du Fragen hast, dann kontaktiere uns bitte.

                                                           
\ No newline at end of file
+${brand} Bestätigungs-Code ist ${code}

${brand_label_url}

Bestätige deine E-Mail-Adresse

${email} wurde verwendet, um ein Benutzerkonto auf ${brand} zu erstellen. Gib diesen Code ein, um deine E-Mail-Adresse zu verifizieren und dein Konto zu erstellen.

 

${code}

 

Wenn du Fragen hast, dann kontaktiere uns bitte.

                                                           
\ No newline at end of file
diff --git a/services/brig/deb/opt/brig/templates/en/provider/email/activation.html b/services/brig/deb/opt/brig/templates/en/provider/email/activation.html
index 5858bdd4c3c..93e9fe1f829 100644
--- a/services/brig/deb/opt/brig/templates/en/provider/email/activation.html
+++ b/services/brig/deb/opt/brig/templates/en/provider/email/activation.html
@@ -1 +1 @@
-Your ${brand_service} Account

${brand_label_url}

Verify your email

Your email address ${email} was used to register as a ${brand_service}.

To complete the registration, please verify your email address by clicking the button below.

Please note that after your email address is verified, your ${brand_service} account is still subject to approval by our staff, which usually happens within 24 hours. You will be informed of the approval via a separate email.

 
Verify
 

If you can’t click the button, copy and paste this link into your browser:

${url}

If you didn’t register for a ${brand} service provider account using this email address, you can safely ignore this message. If you want to report abuse of your email address, please contact us.

Please don’t reply to this message.

                                                           
\ No newline at end of file
+Your ${brand_service} Account

${brand_label_url}

Verify your email

Your email address ${email} was used to register as a ${brand_service}.

To complete the registration, please verify your email address by clicking the button below.

Please note that after your email address is verified, your ${brand_service} account is still subject to approval by our staff, which usually happens within 24 hours. You will be informed of the approval via a separate email.

 
Verify
 

If you can’t click the button, copy and paste this link into your browser:

${url}

If you didn’t register for a ${brand} service provider account using this email address, you can safely ignore this message. If you want to report abuse of your email address, please contact us.

Please don’t reply to this message.

                                                           
\ No newline at end of file
diff --git a/services/brig/deb/opt/brig/templates/en/provider/email/approval-confirm.html b/services/brig/deb/opt/brig/templates/en/provider/email/approval-confirm.html
index 86cc1cf3884..abf727b4406 100644
--- a/services/brig/deb/opt/brig/templates/en/provider/email/approval-confirm.html
+++ b/services/brig/deb/opt/brig/templates/en/provider/email/approval-confirm.html
@@ -1 +1 @@
-Your ${brand_service} Account

${brand_label_url}

Hello,

We are happy to inform you that you are now an approved ${brand_service}.

Please don’t reply to this message.

If you didn’t register for a ${brand_service} account using this e-mail address, please contact us.

                                                           
\ No newline at end of file
+Your ${brand_service} Account

${brand_label_url}

Hello,

We are happy to inform you that you are now an approved ${brand_service}.

Please don’t reply to this message.

If you didn’t register for a ${brand_service} account using this e-mail address, please contact us.

                                                           
\ No newline at end of file
diff --git a/services/brig/deb/opt/brig/templates/en/provider/email/approval-request.html b/services/brig/deb/opt/brig/templates/en/provider/email/approval-request.html
index c0c1f2d954e..fa32425af4b 100644
--- a/services/brig/deb/opt/brig/templates/en/provider/email/approval-request.html
+++ b/services/brig/deb/opt/brig/templates/en/provider/email/approval-request.html
@@ -1 +1 @@
-Approval Request: ${brand_service}

${brand_label_url}

Approval request

A new ${brand_service} has registered and is awaiting approval. Please review the information provided below.

Name: ${name}

Email: ${email}

Website: ${url}

Description: ${description}

If the request seems genuine, you can approve the provider by clicking on the button below. Once approved, the provider will be able to sign in and start registering services that ${brand} users can add to their conversations.

If the request seems dubious, please contact the provider for clarifications before proceeding.

 
Approve
 

If you can’t click the button, copy and paste this link into your browser:

${url}

Please don’t reply to this message.

                                                           
\ No newline at end of file
+Approval Request: ${brand_service}

${brand_label_url}

Approval request

A new ${brand_service} has registered and is awaiting approval. Please review the information provided below.

Name: ${name}

Email: ${email}

Website: ${url}

Description: ${description}

If the request seems genuine, you can approve the provider by clicking on the button below. Once approved, the provider will be able to sign in and start registering services that ${brand} users can add to their conversations.

If the request seems dubious, please contact the provider for clarifications before proceeding.

 
Approve
 

If you can’t click the button, copy and paste this link into your browser:

${url}

Please don’t reply to this message.

                                                           
\ No newline at end of file
diff --git a/services/brig/deb/opt/brig/templates/en/team/email/invitation.html b/services/brig/deb/opt/brig/templates/en/team/email/invitation.html
index 1f52baa0626..1fa19f30821 100644
--- a/services/brig/deb/opt/brig/templates/en/team/email/invitation.html
+++ b/services/brig/deb/opt/brig/templates/en/team/email/invitation.html
@@ -1 +1 @@
-You have been invited to join a team on ${brand}

${brand_label_url}

Team invitation

${inviter} has invited you to join a team on ${brand}. Click the button below to accept the invitation.

 
Accept invitation
 

If you can’t click the button, copy and paste this link into your browser:

${url}

If you have any questions, please contact us.

                                                           
\ No newline at end of file
+You have been invited to join a team on ${brand}

${brand_label_url}

Team invitation

${inviter} has invited you to join a team on ${brand}. Click the button below to accept the invitation.

 
Accept invitation
 

If you can’t click the button, copy and paste this link into your browser:

${url}

If you have any questions, please contact us.

                                                           
\ No newline at end of file
diff --git a/services/brig/deb/opt/brig/templates/en/team/email/new-member-welcome.html b/services/brig/deb/opt/brig/templates/en/team/email/new-member-welcome.html
index 968305a26ea..099763d9cd4 100644
--- a/services/brig/deb/opt/brig/templates/en/team/email/new-member-welcome.html
+++ b/services/brig/deb/opt/brig/templates/en/team/email/new-member-welcome.html
@@ -1 +1 @@
-You joined a team on ${brand}

${brand_label_url}

Welcome to ${team_name}.

You have just joined a team called ${team_name} on ${brand} with ${email}.

 

${brand} combines strong encryption, a rich feature set and ease-of-use in one app like never before. Works on all popular platforms.

 
Download ${brand}
 

If you can’t click the button, copy and paste this link into your browser:

${url}

If you have any questions, please contact us.

Team ID: ${team_id}

                                                           
\ No newline at end of file
+You joined a team on ${brand}

${brand_label_url}

Welcome to ${team_name}.

You have just joined a team called ${team_name} on ${brand} with ${email}.

 

${brand} combines strong encryption, a rich feature set and ease-of-use in one app like never before. Works on all popular platforms.

 
Download ${brand}
 

If you can’t click the button, copy and paste this link into your browser:

${url}

If you have any questions, please contact us.

Team ID: ${team_id}

                                                           
\ No newline at end of file
diff --git a/services/brig/deb/opt/brig/templates/en/user/email/activation.html b/services/brig/deb/opt/brig/templates/en/user/email/activation.html
index e9213590ee0..0961d65bd5f 100644
--- a/services/brig/deb/opt/brig/templates/en/user/email/activation.html
+++ b/services/brig/deb/opt/brig/templates/en/user/email/activation.html
@@ -1 +1 @@
-Your ${brand} Account

${brand_label_url}

Verify your email

${email} was used to register on ${brand}.
Click the button to verify your address.

 
Verify
 

If you can’t click the button, copy and paste this link into your browser:

${url}

If you have any questions, please contact us.

                                                           
\ No newline at end of file
+Your ${brand} Account

${brand_label_url}

Verify your email

${email} was used to register on ${brand}.
Click the button to verify your address.

 
Verify
 

If you can’t click the button, copy and paste this link into your browser:

${url}

If you have any questions, please contact us.

                                                           
\ No newline at end of file
diff --git a/services/brig/deb/opt/brig/templates/en/user/email/deletion.html b/services/brig/deb/opt/brig/templates/en/user/email/deletion.html
index 89e0ea8d067..4b24e12d86e 100644
--- a/services/brig/deb/opt/brig/templates/en/user/email/deletion.html
+++ b/services/brig/deb/opt/brig/templates/en/user/email/deletion.html
@@ -1 +1 @@
-Delete account?

${brand_label_url}

Delete your account

We’ve received a request to delete your ${brand} account. Click the button below within 10 minutes to delete all your conversations, content and connections.

 
Delete account
 

If you can’t click the button, copy and paste this link into your browser:

${url}

If you didn’t request this, reset your password.

If you have any questions, please contact us.

                                                           
\ No newline at end of file +Delete account?

${brand_label_url}

Delete your account

We’ve received a request to delete your ${brand} account. Click the button below within 10 minutes to delete all your conversations, content and connections.

 
Delete account
 

If you can’t click the button, copy and paste this link into your browser:

${url}

If you didn’t request this, reset your password.

If you have any questions, please contact us.

                                                           
\ No newline at end of file diff --git a/services/brig/deb/opt/brig/templates/en/user/email/new-client.html b/services/brig/deb/opt/brig/templates/en/user/email/new-client.html index fdd443a4f49..a3e28d92a94 100644 --- a/services/brig/deb/opt/brig/templates/en/user/email/new-client.html +++ b/services/brig/deb/opt/brig/templates/en/user/email/new-client.html @@ -1 +1 @@ -New device

${brand_label_url}

New device

Your ${brand} account was used on:

${date}

${model}

You may have installed ${brand} on a new device or installed it again on an existing one. If that was not the case, go to ${brand} Settings, remove the device and reset your password.

If you have any questions, please contact us.

                                                           
\ No newline at end of file +New device

${brand_label_url}

New device

Your ${brand} account was used on:

${date}

${model}

You may have installed ${brand} on a new device or installed it again on an existing one. If that was not the case, go to ${brand} Settings, remove the device and reset your password.

If you have any questions, please contact us.

                                                           
\ No newline at end of file diff --git a/services/brig/deb/opt/brig/templates/en/user/email/password-reset.html b/services/brig/deb/opt/brig/templates/en/user/email/password-reset.html index 6203a8b7947..0ee528b68d1 100644 --- a/services/brig/deb/opt/brig/templates/en/user/email/password-reset.html +++ b/services/brig/deb/opt/brig/templates/en/user/email/password-reset.html @@ -1 +1 @@ -Password Change at ${brand}

${brand_label_url}

Reset your password

We’ve received a request to reset the password for your ${brand} account. To create a new password, click the button below.

 
Reset password
 

If you can’t click the button, copy and paste this link into your browser:

${url}

If you have any questions, please contact us.

                                                           
\ No newline at end of file +Password Change at ${brand}

${brand_label_url}

Reset your password

We’ve received a request to reset the password for your ${brand} account. To create a new password, click the button below.

 
Reset password
 

If you can’t click the button, copy and paste this link into your browser:

${url}

If you have any questions, please contact us.

                                                           
\ No newline at end of file diff --git a/services/brig/deb/opt/brig/templates/en/user/email/team-activation.html b/services/brig/deb/opt/brig/templates/en/user/email/team-activation.html index 10beb757272..b35e702153c 100644 --- a/services/brig/deb/opt/brig/templates/en/user/email/team-activation.html +++ b/services/brig/deb/opt/brig/templates/en/user/email/team-activation.html @@ -1 +1 @@ -${brand} Account

${brand_label_url}

Your new account on ${brand}

A new ${brand} team was created with ${email}. Please verify your email.

 
Verify
 

If you can’t click the button, copy and paste this link into your browser:

${url}

If you have any questions, please contact us.

                                                           
\ No newline at end of file +${brand} Account

${brand_label_url}

Your new account on ${brand}

A new ${brand} team was created with ${email}. Please verify your email.

 
Verify
 

If you can’t click the button, copy and paste this link into your browser:

${url}

If you have any questions, please contact us.

                                                           
\ No newline at end of file diff --git a/services/brig/deb/opt/brig/templates/en/user/email/update.html b/services/brig/deb/opt/brig/templates/en/user/email/update.html index e6cb7b14d6e..04c3e6d0f34 100644 --- a/services/brig/deb/opt/brig/templates/en/user/email/update.html +++ b/services/brig/deb/opt/brig/templates/en/user/email/update.html @@ -1 +1 @@ -Your new email address on ${brand}

${brand_label_url}

Verify your email

${email} was registered as your new email address on ${brand}. Click the button below to verify your address.

 
Verify
 

If you can’t click the button, copy and paste this link into your browser:

${url}

If you have any questions, please contact us.

                                                           
\ No newline at end of file +Your new email address on ${brand}

${brand_label_url}

Verify your email

${email} was registered as your new email address on ${brand}. Click the button below to verify your address.

 
Verify
 

If you can’t click the button, copy and paste this link into your browser:

${url}

If you have any questions, please contact us.

                                                           
\ No newline at end of file diff --git a/services/brig/deb/opt/brig/templates/en/user/email/verification.html b/services/brig/deb/opt/brig/templates/en/user/email/verification.html index 62e9e4534dd..cac82605a86 100644 --- a/services/brig/deb/opt/brig/templates/en/user/email/verification.html +++ b/services/brig/deb/opt/brig/templates/en/user/email/verification.html @@ -1 +1 @@ -${brand} verification code is ${code}

${brand_label_url}

Verify your email

${email} was used to register on ${brand}. Enter this code to verify your email and create your account.

 

${code}

 

If you have any questions, please contact us.

                                                           
\ No newline at end of file +${brand} verification code is ${code}

${brand_label_url}

Verify your email

${email} was used to register on ${brand}. Enter this code to verify your email and create your account.

 

${code}

 

If you have any questions, please contact us.

                                                           
\ No newline at end of file diff --git a/services/brig/deb/opt/brig/templates/et/user/email/activation.html b/services/brig/deb/opt/brig/templates/et/user/email/activation.html index 8d22a627988..8db8a4f3f5e 100644 --- a/services/brig/deb/opt/brig/templates/et/user/email/activation.html +++ b/services/brig/deb/opt/brig/templates/et/user/email/activation.html @@ -1 +1 @@ -Your ${brand} Account

${brand_label_url}

Kinnita oma e-posti aadress

${email} was used to register on ${brand}.
Click the button to verify your address.

 
Kinnita
 

Kui sul pole võimalik nuppu klikkida, siis kopeeri allolev aadress veebibrauserisse:

${url}

If you have any questions, please contact us.

                                                           
\ No newline at end of file +Your ${brand} Account

${brand_label_url}

Kinnita oma e-posti aadress

${email} was used to register on ${brand}.
Click the button to verify your address.

 
Kinnita
 

Kui sul pole võimalik nuppu klikkida, siis kopeeri allolev aadress veebibrauserisse:

${url}

If you have any questions, please contact us.

                                                           
\ No newline at end of file diff --git a/services/brig/deb/opt/brig/templates/et/user/email/deletion.html b/services/brig/deb/opt/brig/templates/et/user/email/deletion.html index 3fb80587843..aba84eb4d44 100644 --- a/services/brig/deb/opt/brig/templates/et/user/email/deletion.html +++ b/services/brig/deb/opt/brig/templates/et/user/email/deletion.html @@ -1 +1 @@ -Kustuta konto?

${brand_label_url}

Kustuta konto

We’ve received a request to delete your ${brand} account. Kogu kontoga seotud info kustutamise kinnitamiseks kliki kümne minuti jooksul alloleval lingil.

 
Kustuta konto
 

Kui sul pole võimalik nuppu klikkida, siis kopeeri allolev aadress veebibrauserisse:

${url}

If you didn’t request this, reset your password.

If you have any questions, please contact us.

                                                           
\ No newline at end of file +Kustuta konto?

${brand_label_url}

Kustuta konto

We’ve received a request to delete your ${brand} account. Kogu kontoga seotud info kustutamise kinnitamiseks kliki kümne minuti jooksul alloleval lingil.

 
Kustuta konto
 

Kui sul pole võimalik nuppu klikkida, siis kopeeri allolev aadress veebibrauserisse:

${url}

If you didn’t request this, reset your password.

If you have any questions, please contact us.

                                                           
\ No newline at end of file diff --git a/services/brig/deb/opt/brig/templates/et/user/email/new-client.html b/services/brig/deb/opt/brig/templates/et/user/email/new-client.html index b84a0901e43..2498b1d3633 100644 --- a/services/brig/deb/opt/brig/templates/et/user/email/new-client.html +++ b/services/brig/deb/opt/brig/templates/et/user/email/new-client.html @@ -1 +1 @@ -Sisselogimine uuelt seadmelt

${brand_label_url}

Wire uuel seadmel

Your ${brand} account was used on:

${date}

${model}

You may have installed ${brand} on a new device or installed it again on an existing one. If that was not the case, go to ${brand} Settings, remove the device and reset your password.

If you have any questions, please contact us.

                                                           
\ No newline at end of file +Sisselogimine uuelt seadmelt

${brand_label_url}

Wire uuel seadmel

Your ${brand} account was used on:

${date}

${model}

You may have installed ${brand} on a new device or installed it again on an existing one. If that was not the case, go to ${brand} Settings, remove the device and reset your password.

If you have any questions, please contact us.

                                                           
\ No newline at end of file diff --git a/services/brig/deb/opt/brig/templates/et/user/email/password-reset.html b/services/brig/deb/opt/brig/templates/et/user/email/password-reset.html index 1b13170a94a..38a4c8d2e08 100644 --- a/services/brig/deb/opt/brig/templates/et/user/email/password-reset.html +++ b/services/brig/deb/opt/brig/templates/et/user/email/password-reset.html @@ -1 +1 @@ -Password Change at ${brand}

${brand_label_url}

Lähtesta oma parool

We’ve received a request to reset the password for your ${brand} account. Uue salasõna loomiseks vajutage järgmisele lingile:

 
Lähtesta parool
 

Kui sul pole võimalik nuppu klikkida, siis kopeeri allolev aadress veebibrauserisse:

${url}

If you have any questions, please contact us.

                                                           
\ No newline at end of file +Password Change at ${brand}

${brand_label_url}

Lähtesta oma parool

We’ve received a request to reset the password for your ${brand} account. Uue salasõna loomiseks vajutage järgmisele lingile:

 
Lähtesta parool
 

Kui sul pole võimalik nuppu klikkida, siis kopeeri allolev aadress veebibrauserisse:

${url}

If you have any questions, please contact us.

                                                           
\ No newline at end of file diff --git a/services/brig/deb/opt/brig/templates/et/user/email/team-activation.html b/services/brig/deb/opt/brig/templates/et/user/email/team-activation.html index 59cf5a03e05..f157a4f99ad 100644 --- a/services/brig/deb/opt/brig/templates/et/user/email/team-activation.html +++ b/services/brig/deb/opt/brig/templates/et/user/email/team-activation.html @@ -1 +1 @@ -${brand} Account

${brand_label_url}

Your new account on ${brand}

A new ${brand} team was created with ${email}. Palun kinnita oma meiliaadress.

 
Kinnita
 

Kui sul pole võimalik nuppu klikkida, siis kopeeri allolev aadress veebibrauserisse:

${url}

If you have any questions, please contact us.

                                                           
\ No newline at end of file +${brand} Account

${brand_label_url}

Your new account on ${brand}

A new ${brand} team was created with ${email}. Palun kinnita oma meiliaadress.

 
Kinnita
 

Kui sul pole võimalik nuppu klikkida, siis kopeeri allolev aadress veebibrauserisse:

${url}

If you have any questions, please contact us.

                                                           
\ No newline at end of file diff --git a/services/brig/deb/opt/brig/templates/et/user/email/update.html b/services/brig/deb/opt/brig/templates/et/user/email/update.html index c60eb119c37..af6c4db2735 100644 --- a/services/brig/deb/opt/brig/templates/et/user/email/update.html +++ b/services/brig/deb/opt/brig/templates/et/user/email/update.html @@ -1 +1 @@ -Your new email address on ${brand}

${brand_label_url}

Kinnita oma e-posti aadress

${email} was registered as your new email address on ${brand}. Aadressi kinnitamiseks kliki alloleval lingil.

 
Kinnita
 

Kui sul pole võimalik nuppu klikkida, siis kopeeri allolev aadress veebibrauserisse:

${url}

If you have any questions, please contact us.

                                                           
\ No newline at end of file +Your new email address on ${brand}

${brand_label_url}

Kinnita oma e-posti aadress

${email} was registered as your new email address on ${brand}. Aadressi kinnitamiseks kliki alloleval lingil.

 
Kinnita
 

Kui sul pole võimalik nuppu klikkida, siis kopeeri allolev aadress veebibrauserisse:

${url}

If you have any questions, please contact us.

                                                           
\ No newline at end of file diff --git a/services/brig/deb/opt/brig/templates/et/user/email/verification.html b/services/brig/deb/opt/brig/templates/et/user/email/verification.html index 28f72733476..a254dfedc5f 100644 --- a/services/brig/deb/opt/brig/templates/et/user/email/verification.html +++ b/services/brig/deb/opt/brig/templates/et/user/email/verification.html @@ -1 +1 @@ -${brand} verification code is ${code}

${brand_label_url}

Kinnita oma e-posti aadress

${email} was used to register on ${brand}. Konto loomiseks sisestage see kood brauseriaknas.

 

${code}

 

If you have any questions, please contact us.

                                                           
\ No newline at end of file +${brand} verification code is ${code}

${brand_label_url}

Kinnita oma e-posti aadress

${email} was used to register on ${brand}. Konto loomiseks sisestage see kood brauseriaknas.

 

${code}

 

If you have any questions, please contact us.

                                                           
\ No newline at end of file diff --git a/services/brig/deb/opt/brig/templates/fr/user/email/activation.html b/services/brig/deb/opt/brig/templates/fr/user/email/activation.html index 5a5a63f80fc..9e12b8121d6 100644 --- a/services/brig/deb/opt/brig/templates/fr/user/email/activation.html +++ b/services/brig/deb/opt/brig/templates/fr/user/email/activation.html @@ -1 +1 @@ -Votre Compte ${brand}

${brand_label_url}

Vérification de votre adresse email

${email} a été utilisé pour s'enregistrer sur ${brand}.
Cliquez sur le bouton ci-dessous pour vérifier votre adresse.

 
Vérifier
 

Si vous ne pouvez pas cliquer sur le bouton, copiez et collez ce lien dans votre navigateur :

${url}

Si vous avez des questions, veuillez nous contacter.

                                                           
\ No newline at end of file +Votre Compte ${brand}

${brand_label_url}

Vérification de votre adresse email

${email} a été utilisé pour s'enregistrer sur ${brand}.
Cliquez sur le bouton ci-dessous pour vérifier votre adresse.

 
Vérifier
 

Si vous ne pouvez pas cliquer sur le bouton, copiez et collez ce lien dans votre navigateur :

${url}

Si vous avez des questions, veuillez nous contacter.

                                                           
\ No newline at end of file diff --git a/services/brig/deb/opt/brig/templates/fr/user/email/deletion.html b/services/brig/deb/opt/brig/templates/fr/user/email/deletion.html index 27ee8750023..bdfa7667793 100644 --- a/services/brig/deb/opt/brig/templates/fr/user/email/deletion.html +++ b/services/brig/deb/opt/brig/templates/fr/user/email/deletion.html @@ -1 +1 @@ -Supprimer votre compte ?

${brand_label_url}

Supprimer votre compte

Nous avons reçu une demande de suppression de votre compte ${brand}. Cliquez sur le lien ci-dessous dans les 10 minutes pour supprimer toutes vos conversations, contenus et connexions.

 
Supprimer le compte
 

Si vous ne pouvez pas cliquer sur le bouton, copiez et collez ce lien dans votre navigateur :

${url}

Si vous n'êtes pas à l'origine de cette demande, réinitialisez votre mot de passe.

Si vous avez des questions, veuillez nous contacter.

                                                           
\ No newline at end of file +Supprimer votre compte ?

${brand_label_url}

Supprimer votre compte

Nous avons reçu une demande de suppression de votre compte ${brand}. Cliquez sur le lien ci-dessous dans les 10 minutes pour supprimer toutes vos conversations, contenus et connexions.

 
Supprimer le compte
 

Si vous ne pouvez pas cliquer sur le bouton, copiez et collez ce lien dans votre navigateur :

${url}

Si vous n'êtes pas à l'origine de cette demande, réinitialisez votre mot de passe.

Si vous avez des questions, veuillez nous contacter.

                                                           
\ No newline at end of file diff --git a/services/brig/deb/opt/brig/templates/fr/user/email/new-client.html b/services/brig/deb/opt/brig/templates/fr/user/email/new-client.html index f14f34cd5ca..e742829aff7 100644 --- a/services/brig/deb/opt/brig/templates/fr/user/email/new-client.html +++ b/services/brig/deb/opt/brig/templates/fr/user/email/new-client.html @@ -1 +1 @@ -Nouvel appareil

${brand_label_url}

Nouvel appareil

Votre compte ${brand} a été utilisé sur :

${date}

${model}

Il se peut que vous ayez installé ${brand} sur un nouvel appareil ou réinstallé sur le même. Si ce n'était pas le cas, allez dans les paramètres de ${brand}, retirez cet appareil et réinitialisez votre mot de passe.

Si vous avez des questions, veuillez nous contacter.

                                                           
\ No newline at end of file +Nouvel appareil

${brand_label_url}

Nouvel appareil

Votre compte ${brand} a été utilisé sur :

${date}

${model}

Il se peut que vous ayez installé ${brand} sur un nouvel appareil ou réinstallé sur le même. Si ce n'était pas le cas, allez dans les paramètres de ${brand}, retirez cet appareil et réinitialisez votre mot de passe.

Si vous avez des questions, veuillez nous contacter.

                                                           
\ No newline at end of file diff --git a/services/brig/deb/opt/brig/templates/fr/user/email/password-reset.html b/services/brig/deb/opt/brig/templates/fr/user/email/password-reset.html index e1bec18caac..783aa7fabc7 100644 --- a/services/brig/deb/opt/brig/templates/fr/user/email/password-reset.html +++ b/services/brig/deb/opt/brig/templates/fr/user/email/password-reset.html @@ -1 +1 @@ -Réinitialisation du mot de passe ${brand}

${brand_label_url}

Réinitialiser votre mot de passe

Nous avons reçu une demande pour réinitialiser le mot de passe de votre compte ${brand}. Pour créer un nouveau mot de passe, cliquez sur le bouton ci-dessous.

 
Réinitialiser le mot de passe
 

Si vous ne pouvez pas cliquer sur le bouton, copiez et collez ce lien dans votre navigateur :

${url}

Si vous avez des questions, veuillez nous contacter.

                                                           
\ No newline at end of file +Réinitialisation du mot de passe ${brand}

${brand_label_url}

Réinitialiser votre mot de passe

Nous avons reçu une demande pour réinitialiser le mot de passe de votre compte ${brand}. Pour créer un nouveau mot de passe, cliquez sur le bouton ci-dessous.

 
Réinitialiser le mot de passe
 

Si vous ne pouvez pas cliquer sur le bouton, copiez et collez ce lien dans votre navigateur :

${url}

Si vous avez des questions, veuillez nous contacter.

                                                           
\ No newline at end of file diff --git a/services/brig/deb/opt/brig/templates/fr/user/email/team-activation.html b/services/brig/deb/opt/brig/templates/fr/user/email/team-activation.html index 8e6bb7de2f6..5b85ceb25df 100644 --- a/services/brig/deb/opt/brig/templates/fr/user/email/team-activation.html +++ b/services/brig/deb/opt/brig/templates/fr/user/email/team-activation.html @@ -1 +1 @@ -Compte ${brand}

${brand_label_url}

Votre nouveau compte ${brand}

Une nouvelle équipe a été créée sur ${brand} avec ${email}. Veuillez vérifier votre adresse email.

 
Vérifier
 

Si vous ne pouvez pas cliquer sur le bouton, copiez et collez ce lien dans votre navigateur :

${url}

Si vous avez des questions, veuillez nous contacter.

                                                           
\ No newline at end of file +Compte ${brand}

${brand_label_url}

Votre nouveau compte ${brand}

Une nouvelle équipe a été créée sur ${brand} avec ${email}. Veuillez vérifier votre adresse email.

 
Vérifier
 

Si vous ne pouvez pas cliquer sur le bouton, copiez et collez ce lien dans votre navigateur :

${url}

Si vous avez des questions, veuillez nous contacter.

                                                           
\ No newline at end of file diff --git a/services/brig/deb/opt/brig/templates/fr/user/email/update.html b/services/brig/deb/opt/brig/templates/fr/user/email/update.html index ebc43c79776..08f7c53574c 100644 --- a/services/brig/deb/opt/brig/templates/fr/user/email/update.html +++ b/services/brig/deb/opt/brig/templates/fr/user/email/update.html @@ -1 +1 @@ -Votre nouvelle adresse e-mail sur ${brand}

${brand_label_url}

Vérification de votre adresse email

${email} a été enregistré comme votre nouvelle adresse email sur ${brand}. Cliquez sur le bouton ci-dessous pour vérifier votre adresse email.

 
Vérifier
 

Si vous ne pouvez pas cliquer sur le bouton, copiez et collez ce lien dans votre navigateur :

${url}

Si vous avez des questions, veuillez nous contacter.

                                                           
\ No newline at end of file +Votre nouvelle adresse e-mail sur ${brand}

${brand_label_url}

Vérification de votre adresse email

${email} a été enregistré comme votre nouvelle adresse email sur ${brand}. Cliquez sur le bouton ci-dessous pour vérifier votre adresse email.

 
Vérifier
 

Si vous ne pouvez pas cliquer sur le bouton, copiez et collez ce lien dans votre navigateur :

${url}

Si vous avez des questions, veuillez nous contacter.

                                                           
\ No newline at end of file diff --git a/services/brig/deb/opt/brig/templates/fr/user/email/verification.html b/services/brig/deb/opt/brig/templates/fr/user/email/verification.html index 35323075161..171d39a283f 100644 --- a/services/brig/deb/opt/brig/templates/fr/user/email/verification.html +++ b/services/brig/deb/opt/brig/templates/fr/user/email/verification.html @@ -1 +1 @@ -Votre code de vérification ${brand} est ${code}

${brand_label_url}

Vérification de votre adresse email

L'adresse ${email} a été utilisée pour créer un compte sur ${brand}. Entrez ce code afin de vérifier votre adresse email et créer votre compte.

 

${code}

 

Si vous avez des questions, veuillez nous contacter.

                                                           
\ No newline at end of file +Votre code de vérification ${brand} est ${code}

${brand_label_url}

Vérification de votre adresse email

L'adresse ${email} a été utilisée pour créer un compte sur ${brand}. Entrez ce code afin de vérifier votre adresse email et créer votre compte.

 

${code}

 

Si vous avez des questions, veuillez nous contacter.

                                                           
\ No newline at end of file diff --git a/services/brig/deb/opt/brig/templates/lt/user/email/activation.html b/services/brig/deb/opt/brig/templates/lt/user/email/activation.html index 54fe886af10..7941f85b98f 100644 --- a/services/brig/deb/opt/brig/templates/lt/user/email/activation.html +++ b/services/brig/deb/opt/brig/templates/lt/user/email/activation.html @@ -1 +1 @@ -Your ${brand} Account

${brand_label_url}

Patvirtinkite savo el. paštą

${email} was used to register on ${brand}.
Click the button to verify your address.

 
Patvirtinti
 

Jeigu negalite spustelėti ant mygtuko, nukopijuokite ir įdėkite šią nuorodą į savo naršyklę:

${url}

If you have any questions, please contact us.

                                                           
\ No newline at end of file +Your ${brand} Account

${brand_label_url}

Patvirtinkite savo el. paštą

${email} was used to register on ${brand}.
Click the button to verify your address.

 
Patvirtinti
 

Jeigu negalite spustelėti ant mygtuko, nukopijuokite ir įdėkite šią nuorodą į savo naršyklę:

${url}

If you have any questions, please contact us.

                                                           
\ No newline at end of file diff --git a/services/brig/deb/opt/brig/templates/lt/user/email/deletion.html b/services/brig/deb/opt/brig/templates/lt/user/email/deletion.html index bf753b484b0..57210672518 100644 --- a/services/brig/deb/opt/brig/templates/lt/user/email/deletion.html +++ b/services/brig/deb/opt/brig/templates/lt/user/email/deletion.html @@ -1 +1 @@ -Ištrinti paskyrą?

${brand_label_url}

Ištrinti jūsų paskyrą

Mes gavome užklausą ištrinti jūsų ${brand} paskyrą. Norėdami ištrinti visus savo pokalbius, visą turinį ir ryšius, per 10 minučių spustelėkite žemiau esantį mygtuką.

 
Ištrinti paskyrą
 

Jeigu negalite spustelėti ant mygtuko, nukopijuokite ir įdėkite šią nuorodą į savo naršyklę:

${url}

Jeigu jūs nebuvote to užklausę, atstatykite savo slaptažodį.

If you have any questions, please contact us.

                                                           
\ No newline at end of file +Ištrinti paskyrą?

${brand_label_url}

Ištrinti jūsų paskyrą

Mes gavome užklausą ištrinti jūsų ${brand} paskyrą. Norėdami ištrinti visus savo pokalbius, visą turinį ir ryšius, per 10 minučių spustelėkite žemiau esantį mygtuką.

 
Ištrinti paskyrą
 

Jeigu negalite spustelėti ant mygtuko, nukopijuokite ir įdėkite šią nuorodą į savo naršyklę:

${url}

Jeigu jūs nebuvote to užklausę, atstatykite savo slaptažodį.

If you have any questions, please contact us.

                                                           
\ No newline at end of file diff --git a/services/brig/deb/opt/brig/templates/lt/user/email/new-client.html b/services/brig/deb/opt/brig/templates/lt/user/email/new-client.html index faee7ecf087..c6cee9d4f99 100644 --- a/services/brig/deb/opt/brig/templates/lt/user/email/new-client.html +++ b/services/brig/deb/opt/brig/templates/lt/user/email/new-client.html @@ -1 +1 @@ -Naujas įrenginys

${brand_label_url}

Naujas įrenginys

Your ${brand} account was used on:

${date}

${model}

You may have installed ${brand} on a new device or installed it again on an existing one. If that was not the case, go to ${brand} Settings, remove the device and reset your password.

If you have any questions, please contact us.

                                                           
\ No newline at end of file +Naujas įrenginys

${brand_label_url}

Naujas įrenginys

Your ${brand} account was used on:

${date}

${model}

You may have installed ${brand} on a new device or installed it again on an existing one. If that was not the case, go to ${brand} Settings, remove the device and reset your password.

If you have any questions, please contact us.

                                                           
\ No newline at end of file diff --git a/services/brig/deb/opt/brig/templates/lt/user/email/password-reset.html b/services/brig/deb/opt/brig/templates/lt/user/email/password-reset.html index 0e2eeff4908..0696825bbdf 100644 --- a/services/brig/deb/opt/brig/templates/lt/user/email/password-reset.html +++ b/services/brig/deb/opt/brig/templates/lt/user/email/password-reset.html @@ -1 +1 @@ -Password Change at ${brand}

${brand_label_url}

Atstatyti jūsų slaptažodį

We’ve received a request to reset the password for your ${brand} account. Norėdami susikurti naują slaptažodį, spustelėkite mygtuką žemiau.

 
Atstatyti slaptažodį
 

Jeigu negalite spustelėti ant mygtuko, nukopijuokite ir įdėkite šią nuorodą į savo naršyklę:

${url}

If you have any questions, please contact us.

                                                           
\ No newline at end of file +Password Change at ${brand}

${brand_label_url}

Atstatyti jūsų slaptažodį

We’ve received a request to reset the password for your ${brand} account. Norėdami susikurti naują slaptažodį, spustelėkite mygtuką žemiau.

 
Atstatyti slaptažodį
 

Jeigu negalite spustelėti ant mygtuko, nukopijuokite ir įdėkite šią nuorodą į savo naršyklę:

${url}

If you have any questions, please contact us.

                                                           
\ No newline at end of file diff --git a/services/brig/deb/opt/brig/templates/lt/user/email/team-activation.html b/services/brig/deb/opt/brig/templates/lt/user/email/team-activation.html index 9ae8f126881..b5f53f728e6 100644 --- a/services/brig/deb/opt/brig/templates/lt/user/email/team-activation.html +++ b/services/brig/deb/opt/brig/templates/lt/user/email/team-activation.html @@ -1 +1 @@ -${brand} Account

${brand_label_url}

Your new account on ${brand}

A new ${brand} team was created with ${email}. Patvirtinkite savo el. paštą.

 
Patvirtinti
 

Jeigu negalite spustelėti ant mygtuko, nukopijuokite ir įdėkite šią nuorodą į savo naršyklę:

${url}

If you have any questions, please contact us.

                                                           
\ No newline at end of file +${brand} Account

${brand_label_url}

Your new account on ${brand}

A new ${brand} team was created with ${email}. Patvirtinkite savo el. paštą.

 
Patvirtinti
 

Jeigu negalite spustelėti ant mygtuko, nukopijuokite ir įdėkite šią nuorodą į savo naršyklę:

${url}

If you have any questions, please contact us.

                                                           
\ No newline at end of file diff --git a/services/brig/deb/opt/brig/templates/lt/user/email/update.html b/services/brig/deb/opt/brig/templates/lt/user/email/update.html index 66766f08d87..feb073bc74a 100644 --- a/services/brig/deb/opt/brig/templates/lt/user/email/update.html +++ b/services/brig/deb/opt/brig/templates/lt/user/email/update.html @@ -1 +1 @@ -Your new email address on ${brand}

${brand_label_url}

Patvirtinkite savo el. paštą

${email} was registered as your new email address on ${brand}. Norėdami patvirtinti savo adresą, spustelėkite mygtuką žemiau.

 
Patvirtinti
 

Jeigu negalite spustelėti ant mygtuko, nukopijuokite ir įdėkite šią nuorodą į savo naršyklę:

${url}

If you have any questions, please contact us.

                                                           
\ No newline at end of file +Your new email address on ${brand}

${brand_label_url}

Patvirtinkite savo el. paštą

${email} was registered as your new email address on ${brand}. Norėdami patvirtinti savo adresą, spustelėkite mygtuką žemiau.

 
Patvirtinti
 

Jeigu negalite spustelėti ant mygtuko, nukopijuokite ir įdėkite šią nuorodą į savo naršyklę:

${url}

If you have any questions, please contact us.

                                                           
\ No newline at end of file diff --git a/services/brig/deb/opt/brig/templates/lt/user/email/verification.html b/services/brig/deb/opt/brig/templates/lt/user/email/verification.html index d4ca03e03ad..98f0ae0bf44 100644 --- a/services/brig/deb/opt/brig/templates/lt/user/email/verification.html +++ b/services/brig/deb/opt/brig/templates/lt/user/email/verification.html @@ -1 +1 @@ -${brand} verification code is ${code}

${brand_label_url}

Patvirtinkite savo el. paštą

${email} was used to register on ${brand}. Norėdami patvirtinti savo el. paštą ir susikurti paskyrą, įveskite šį kodą.

 

${code}

 

If you have any questions, please contact us.

                                                           
\ No newline at end of file
+${brand} verification code is ${code}

${brand_label_url}

Patvirtinkite savo el. paštą

${email} was used to register on ${brand}. Norėdami patvirtinti savo el. paštą ir susikurti paskyrą, įveskite šį kodą.

 

${code}

 

If you have any questions, please contact us.

                                                           
\ No newline at end of file
diff --git a/services/brig/deb/opt/brig/templates/ru/user/email/activation.html b/services/brig/deb/opt/brig/templates/ru/user/email/activation.html
index 02a4e5bcc47..370f23d5f55 100644
--- a/services/brig/deb/opt/brig/templates/ru/user/email/activation.html
+++ b/services/brig/deb/opt/brig/templates/ru/user/email/activation.html
@@ -1 +1 @@
-Ваша учетная запись ${brand}

${brand_label_url}

Подтвердите ваш email

${email} был использован для регистрации в ${brand}.
Нажмите на кнопку для подтверждения вашего email адреса.

 
Подтвердить
 

Если вы не можете нажать на кнопку, скопируйте и вставьте эту ссылку в свой браузер:

${url}

Если у вас возникли вопросы или нужна помощь, пожалуйста свяжитесь с нами.

                                                           
\ No newline at end of file
+Ваша учетная запись ${brand}

${brand_label_url}

Подтвердите ваш email

${email} был использован для регистрации в ${brand}.
Нажмите на кнопку для подтверждения вашего email адреса.

 
Подтвердить
 

Если вы не можете нажать на кнопку, скопируйте и вставьте эту ссылку в свой браузер:

${url}

Если у вас возникли вопросы или нужна помощь, пожалуйста свяжитесь с нами.

                                                           
\ No newline at end of file
diff --git a/services/brig/deb/opt/brig/templates/ru/user/email/deletion.html b/services/brig/deb/opt/brig/templates/ru/user/email/deletion.html
index 7854d3305d8..d5a7519689d 100644
--- a/services/brig/deb/opt/brig/templates/ru/user/email/deletion.html
+++ b/services/brig/deb/opt/brig/templates/ru/user/email/deletion.html
@@ -1 +1 @@
-Удалить учетную запись?

${brand_label_url}

Удалить учетную запись

Мы получили запрос на удаление вашего аккаунта ${brand}. Нажмите на кнопку ниже в течение 10 минут для удаления всех ваших разговоров, контента и контактов.

 
Удалить учетную запись
 

Если вы не можете нажать на кнопку, скопируйте и вставьте эту ссылку в свой браузер:

${url}

Если вы не запрашивали удаление вашего аккаунта, то сбросьте ваш пароль.

Если у вас возникли вопросы или нужна помощь, пожалуйста свяжитесь с нами.

                                                           
\ No newline at end of file
+Удалить учетную запись?

${brand_label_url}

Удалить учетную запись

Мы получили запрос на удаление вашего аккаунта ${brand}. Нажмите на кнопку ниже в течение 10 минут для удаления всех ваших разговоров, контента и контактов.

 
Удалить учетную запись
 

Если вы не можете нажать на кнопку, скопируйте и вставьте эту ссылку в свой браузер:

${url}

Если вы не запрашивали удаление вашего аккаунта, то сбросьте ваш пароль.

Если у вас возникли вопросы или нужна помощь, пожалуйста свяжитесь с нами.

                                                           
\ No newline at end of file
diff --git a/services/brig/deb/opt/brig/templates/ru/user/email/new-client.html b/services/brig/deb/opt/brig/templates/ru/user/email/new-client.html
index 6e80b6b5049..2ceb35233d1 100644
--- a/services/brig/deb/opt/brig/templates/ru/user/email/new-client.html
+++ b/services/brig/deb/opt/brig/templates/ru/user/email/new-client.html
@@ -1 +1 @@
-Новое устройство

${brand_label_url}

Новое устройство

Ваша учетная запись ${brand} использовалась на:

${date}

${model}

Возможно, вы установили ${brand} на новом устройстве или переустановили его на одном из уже используемых ранее. Если это не так, перейдите в настройки ${brand}, удалите это устройство из списка и сбросьте ваш пароль.

Если у вас возникли вопросы или нужна помощь, пожалуйста свяжитесь с нами.

                                                           
\ No newline at end of file
+Новое устройство

${brand_label_url}

Новое устройство

Ваша учетная запись ${brand} использовалась на:

${date}

${model}

Возможно, вы установили ${brand} на новом устройстве или переустановили его на одном из уже используемых ранее. Если это не так, перейдите в настройки ${brand}, удалите это устройство из списка и сбросьте ваш пароль.

Если у вас возникли вопросы или нужна помощь, пожалуйста свяжитесь с нами.

                                                           
\ No newline at end of file
diff --git a/services/brig/deb/opt/brig/templates/ru/user/email/password-reset.html b/services/brig/deb/opt/brig/templates/ru/user/email/password-reset.html
index cc5a85963de..1164f248ac5 100644
--- a/services/brig/deb/opt/brig/templates/ru/user/email/password-reset.html
+++ b/services/brig/deb/opt/brig/templates/ru/user/email/password-reset.html
@@ -1 +1 @@
-Смена пароля в ${brand}

${brand_label_url}

Сбросить пароль

Мы получили запрос на сброс пароля для вашей учетной записи ${brand}. Чтобы создать новый пароль нажмите на кнопку ниже.

 
Сбросить пароль
 

Если вы не можете нажать на кнопку, скопируйте и вставьте эту ссылку в свой браузер:

${url}

Если у вас возникли вопросы или нужна помощь, пожалуйста свяжитесь с нами.

                                                           
\ No newline at end of file
+Смена пароля в ${brand}

${brand_label_url}

Сбросить пароль

Мы получили запрос на сброс пароля для вашей учетной записи ${brand}. Чтобы создать новый пароль нажмите на кнопку ниже.

 
Сбросить пароль
 

Если вы не можете нажать на кнопку, скопируйте и вставьте эту ссылку в свой браузер:

${url}

Если у вас возникли вопросы или нужна помощь, пожалуйста свяжитесь с нами.

                                                           
\ No newline at end of file
diff --git a/services/brig/deb/opt/brig/templates/ru/user/email/team-activation.html b/services/brig/deb/opt/brig/templates/ru/user/email/team-activation.html
index ad2d97988ed..eaf681c6ec0 100644
--- a/services/brig/deb/opt/brig/templates/ru/user/email/team-activation.html
+++ b/services/brig/deb/opt/brig/templates/ru/user/email/team-activation.html
@@ -1 +1 @@
-Ваша учетная запись ${brand}

${brand_label_url}

Ваша новая учетная запись ${brand}

В ${brand} была создана новая команда с использованием email адреса ${email}. Подтвердите ваш email адрес.

 
Подтвердить
 

Если вы не можете нажать на кнопку, скопируйте и вставьте эту ссылку в свой браузер:

${url}

Если у вас возникли вопросы или нужна помощь, пожалуйста свяжитесь с нами.

                                                           
\ No newline at end of file
+Ваша учетная запись ${brand}

${brand_label_url}

Ваша новая учетная запись ${brand}

В ${brand} была создана новая команда с использованием email адреса ${email}. Подтвердите ваш email адрес.

 
Подтвердить
 

Если вы не можете нажать на кнопку, скопируйте и вставьте эту ссылку в свой браузер:

${url}

Если у вас возникли вопросы или нужна помощь, пожалуйста свяжитесь с нами.

                                                           
\ No newline at end of file
diff --git a/services/brig/deb/opt/brig/templates/ru/user/email/update.html b/services/brig/deb/opt/brig/templates/ru/user/email/update.html
index db75c172106..f555c4733bd 100644
--- a/services/brig/deb/opt/brig/templates/ru/user/email/update.html
+++ b/services/brig/deb/opt/brig/templates/ru/user/email/update.html
@@ -1 +1 @@
-Ваш новый email адрес в ${brand}

${brand_label_url}

Подтвердите ваш email адрес

${email} был указан как ваш новый email адрес в ${brand}. Нажмите на кнопку ниже для подтверждения своего адреса.

 
Подтвердить
 

Если вы не можете нажать на кнопку, скопируйте и вставьте эту ссылку в свой браузер:

${url}

Если у вас возникли вопросы или нужна помощь, пожалуйста свяжитесь с нами.

                                                           
\ No newline at end of file
+Ваш новый email адрес в ${brand}

${brand_label_url}

Подтвердите ваш email адрес

${email} был указан как ваш новый email адрес в ${brand}. Нажмите на кнопку ниже для подтверждения своего адреса.

 
Подтвердить
 

Если вы не можете нажать на кнопку, скопируйте и вставьте эту ссылку в свой браузер:

${url}

Если у вас возникли вопросы или нужна помощь, пожалуйста свяжитесь с нами.

                                                           
\ No newline at end of file
diff --git a/services/brig/deb/opt/brig/templates/ru/user/email/verification.html b/services/brig/deb/opt/brig/templates/ru/user/email/verification.html
index 724704c0d68..f96598cf569 100644
--- a/services/brig/deb/opt/brig/templates/ru/user/email/verification.html
+++ b/services/brig/deb/opt/brig/templates/ru/user/email/verification.html
@@ -1 +1 @@
-Код подтверждения ${brand} - ${code}

${brand_label_url}

Подтвердите ваш email

${email} был использован для регистрации в ${brand}. Введите этот код для подтверждения email и создания учетной записи.

 

${code}

 

Если у вас возникли вопросы или нужна помощь, пожалуйста свяжитесь с нами.

                                                           
\ No newline at end of file
+Код подтверждения ${brand} - ${code}

${brand_label_url}

Подтвердите ваш email

${email} был использован для регистрации в ${brand}. Введите этот код для подтверждения email и создания учетной записи.

 

${code}

 

Если у вас возникли вопросы или нужна помощь, пожалуйста свяжитесь с нами.

                                                           
\ No newline at end of file
diff --git a/services/brig/deb/opt/brig/templates/version b/services/brig/deb/opt/brig/templates/version
index e6d41e92977..a51152c9bcc 100644
--- a/services/brig/deb/opt/brig/templates/version
+++ b/services/brig/deb/opt/brig/templates/version
@@ -1 +1 @@
-v1.0.55
+v1.0.56

From 74188720da475df6a984caa4306bbb58a9495100 Mon Sep 17 00:00:00 2001
From: fisx
Date: Tue, 5 Mar 2019 17:43:56 +0100
Subject: [PATCH 02/23] Fix: empty objects `{}` are valid TeamMemberDeleteData. (#652)

---
 .../test/unit/Test/Brig/Types/Common.hs     | 18 ++++++++++++++++++
 libs/galley-types/src/Galley/Types/Teams.hs |  2 +-
 2 files changed, 19 insertions(+), 1 deletion(-)

diff --git a/libs/brig-types/test/unit/Test/Brig/Types/Common.hs b/libs/brig-types/test/unit/Test/Brig/Types/Common.hs
index 8838930445f..a7c04ec2841 100644
--- a/libs/brig-types/test/unit/Test/Brig/Types/Common.hs
+++ b/libs/brig-types/test/unit/Test/Brig/Types/Common.hs
@@ -4,16 +4,21 @@
 {-# OPTIONS_GHC -Wno-orphans #-}

+-- | This is where currently all the json roundtrip tests happen for brig-types and
+-- galley-types.
 module Test.Brig.Types.Common where

 import Imports
 import Brig.Types.Common
+import Control.Lens
 import Data.Aeson
 import Data.Aeson.Types
 import Data.Proxy
 import Data.Typeable (typeOf)
+import Galley.Types.Teams
 import Test.Brig.Types.Arbitrary ()
 import Test.Tasty
+import Test.Tasty.HUnit
 import Test.Tasty.QuickCheck

@@ -30,6 +35,9 @@ tests = testGroup "Common (types vs. aeson)"
     , run @Asset Proxy
     , run @ExcludedPrefix Proxy
     , run @ManagedBy Proxy
+    , run @TeamMemberDeleteData Proxy
+    , testCase "{} is a valid TeamMemberDeleteData" $ do
+        assertEqual "{}" (Right $ newTeamMemberDeleteData Nothing) (eitherDecode "{}")
     ]
   where
     run :: forall a. (Arbitrary a, Typeable a, ToJSON a, FromJSON a, Eq a, Show a)
@@ -39,3 +47,13 @@ tests = testGroup "Common (types vs.
aeson)"
     msg = show $ typeOf (undefined :: a)
     trip (v :: a) = counterexample (show $ toJSON v) $ Right v === (parseEither parseJSON . toJSON) v
+
+
+instance Arbitrary TeamMemberDeleteData where
+ arbitrary = newTeamMemberDeleteData <$> arbitrary
+
+instance Eq TeamMemberDeleteData where
+ a == b = a ^. tmdAuthPassword == b ^. tmdAuthPassword
+
+instance Show TeamMemberDeleteData where
+ show a = "(TeamMemberDeleteData " <> show (a ^. tmdAuthPassword) <> ")"
diff --git a/libs/galley-types/src/Galley/Types/Teams.hs b/libs/galley-types/src/Galley/Types/Teams.hs
index e45941c6a25..352191e90c2 100644
--- a/libs/galley-types/src/Galley/Types/Teams.hs
+++ b/libs/galley-types/src/Galley/Types/Teams.hs
@@ -720,7 +720,7 @@ instance FromJSON TeamUpdateData where
 instance FromJSON TeamMemberDeleteData where
     parseJSON = withObject "team-member-delete-data" $ \o ->
-        TeamMemberDeleteData <$> o .: "password"
+        TeamMemberDeleteData <$> (o .:? "password")
 instance ToJSON TeamMemberDeleteData where
     toJSON tmd = object

From e5ebf18b36c63d5dbec14b6dc9294f8e4f12973e Mon Sep 17 00:00:00 2001
From: Artyom Kazak
Date: Wed, 6 Mar 2019 00:56:10 +0200
Subject: [PATCH 03/23] Require reauthentication when creating a SCIM token (#639)

* Move instance Arbitrary PlainTextPassword
* Require reauthentication when creating a SCIM token
* Fix the schema
* Add tests
* CI

---
 .../test/unit/Test/Brig/Types/Arbitrary.hs | 19 -----
 libs/types-common/src/Data/Misc.hs         |  9 +++
 libs/types-common/src/Data/Range.hs        | 38 +++++++++-
 services/spar/src/Spar/Error.hs            |  4 +-
 services/spar/src/Spar/Intra/Brig.hs       | 21 +++++-
 services/spar/src/Spar/Scim/Auth.hs        |  1 +
 services/spar/src/Spar/Scim/Swagger.hs     |  1 +
 services/spar/src/Spar/Scim/Types.hs       | 17 ++++-
 services/spar/src/Spar/Scim/User.hs        |  3 +
 .../Test/Spar/Scim/AuthSpec.hs             | 73 +++++++++++++++----
 services/spar/test/Arbitrary.hs            |  2 +-
 11 files changed, 145 insertions(+), 43 deletions(-)

diff --git a/libs/brig-types/test/unit/Test/Brig/Types/Arbitrary.hs
b/libs/brig-types/test/unit/Test/Brig/Types/Arbitrary.hs index e80b94bec7f..c02f4c9e243 100644 --- a/libs/brig-types/test/unit/Test/Brig/Types/Arbitrary.hs +++ b/libs/brig-types/test/unit/Test/Brig/Types/Arbitrary.hs @@ -180,9 +180,6 @@ instance Arbitrary PasswordResetIdentity where instance Arbitrary AsciiBase64Url where arbitrary = encodeBase64Url <$> arbitrary -instance Arbitrary PlainTextPassword where - arbitrary = PlainTextPassword . fromRange <$> genRangeText @6 @1024 arbitrary - instance Arbitrary ReAuthUser where arbitrary = ReAuthUser <$> arbitrary @@ -410,22 +407,6 @@ arbitraryIntegral :: forall n m i. => Gen (Range n m i) arbitraryIntegral = unsafeRange @i @n @m <$> choose (fromKnownNat (Proxy @n), fromKnownNat (Proxy @m)) -genRangeList :: forall (n :: Nat) (m :: Nat) (a :: *). - (Show a, KnownNat n, KnownNat m, LTE n m) - => Gen a -> Gen (Range n m [a]) -genRangeList = genRange id - -genRangeText :: forall (n :: Nat) (m :: Nat). (KnownNat n, KnownNat m, LTE n m) - => Gen Char -> Gen (Range n m ST.Text) -genRangeText = genRange ST.pack - -genRange :: forall (n :: Nat) (m :: Nat) (a :: *) (b :: *). - (Show b, Bounds b, KnownNat n, KnownNat m, LTE n m) - => ([a] -> b) -> Gen a -> Gen (Range n m b) -genRange pack gc = unsafeRange @b @n @m . pack <$> grange (fromKnownNat (Proxy @n)) (fromKnownNat (Proxy @m)) gc - where - grange mi ma gelem = (`replicateM` gelem) =<< choose (mi, ma) - fromKnownNat :: forall (k :: Nat) (i :: *). 
(Num i, KnownNat k) => Proxy k -> i fromKnownNat p = fromIntegral $ natVal p diff --git a/libs/types-common/src/Data/Misc.hs b/libs/types-common/src/Data/Misc.hs index 7a3940606d0..1c8036306c3 100644 --- a/libs/types-common/src/Data/Misc.hs +++ b/libs/types-common/src/Data/Misc.hs @@ -51,6 +51,9 @@ import Data.Text.Encoding (decodeUtf8, encodeUtf8) import Data.ByteString.Lazy (toStrict) import Database.CQL.Protocol hiding (unpack) #endif +#ifdef WITH_ARBITRARY +import Test.QuickCheck (Arbitrary(..)) +#endif import Text.Read (Read (..)) import URI.ByteString hiding (Port) @@ -259,6 +262,12 @@ instance FromJSON PlainTextPassword where parseJSON x = PlainTextPassword . fromRange <$> (parseJSON x :: Json.Parser (Range 6 1024 Text)) +#ifdef WITH_ARBITRARY +instance Arbitrary PlainTextPassword where + -- TODO: why 6..1024? For tests we might want invalid passwords as well, e.g. 3 chars + arbitrary = PlainTextPassword . fromRange <$> genRangeText @6 @1024 arbitrary +#endif + ---------------------------------------------------------------------- -- Functor diff --git a/libs/types-common/src/Data/Range.hs b/libs/types-common/src/Data/Range.hs index 24315ef455b..052d9cf93f8 100644 --- a/libs/types-common/src/Data/Range.hs +++ b/libs/types-common/src/Data/Range.hs @@ -22,6 +22,13 @@ module Data.Range , rinc , rappend , rsingleton + +#ifdef WITH_ARBITRARY + -- * 'Arbitrary' generators + , genRangeList + , genRangeText + , genRange +#endif ) where import Imports @@ -40,6 +47,9 @@ import Data.Text.Ascii (AsciiText) import Database.CQL.Protocol hiding (Set, Map) #endif import Numeric.Natural +#ifdef WITH_ARBITRARY +import Test.QuickCheck (Gen, choose) +#endif import qualified Data.Attoparsec.ByteString as Atto import qualified Data.ByteString as B @@ -104,7 +114,7 @@ errorMsg n m = showString "outside range [" checkedEitherMsg :: forall a n m. 
Within a n m => String -> a -> Either String (Range n m a) checkedEitherMsg msg x = do - let sn = sing :: SNat n + let sn = sing :: SNat n sm = sing :: SNat m case mk x sn sm of Nothing -> Left $ showString msg . showString ": " . errorMsg (fromSing sn) (fromSing sm) $ "" @@ -236,3 +246,29 @@ instance (Within a n m, FromByteString a) => FromByteString (Range n m a) where instance ToByteString a => ToByteString (Range n m a) where builder = builder . fromRange + +#ifdef WITH_ARBITRARY + +---------------------------------------------------------------------------- +-- Arbitrary generators + +genRangeList :: forall (n :: Nat) (m :: Nat) (a :: *). + (Show a, KnownNat n, KnownNat m, LTE n m) + => Gen a -> Gen (Range n m [a]) +genRangeList = genRange id + +genRangeText :: forall (n :: Nat) (m :: Nat). (KnownNat n, KnownNat m, LTE n m) + => Gen Char -> Gen (Range n m Text) +genRangeText = genRange fromString + +genRange :: forall (n :: Nat) (m :: Nat) (a :: *) (b :: *). + (Show b, Bounds b, KnownNat n, KnownNat m, LTE n m) + => ([a] -> b) -> Gen a -> Gen (Range n m b) +genRange pack_ gc = unsafeRange @b @n @m . pack_ + <$> grange (fromIntegral (natVal (Proxy @n))) + (fromIntegral (natVal (Proxy @m))) + gc + where + grange mi ma gelem = (`replicateM` gelem) =<< choose (mi, ma) + +#endif diff --git a/services/spar/src/Spar/Error.hs b/services/spar/src/Spar/Error.hs index add1cf35e43..c6eed43ce02 100644 --- a/services/spar/src/Spar/Error.hs +++ b/services/spar/src/Spar/Error.hs @@ -52,6 +52,7 @@ data SparCustomError | SparBadUserName LT | SparNoBodyInBrigResponse | SparCouldNotParseBrigResponse LT + | SparReAuthRequired | SparBrigError LT | SparBrigErrorWith Status LT | SparNoBodyInGalleyResponse @@ -96,6 +97,7 @@ sparToWaiError (SAML.CustomError (SparBadUserName msg)) = Righ -- Brig-specific errors sparToWaiError (SAML.CustomError SparNoBodyInBrigResponse) = Right $ Wai.Error status502 "bad-upstream" "Failed to get a response from an upstream server." 
sparToWaiError (SAML.CustomError (SparCouldNotParseBrigResponse msg)) = Right $ Wai.Error status502 "bad-upstream" ("Could not parse response body: " <> msg) +sparToWaiError (SAML.CustomError SparReAuthRequired) = Right $ Wai.Error status403 "access-denied" "This operation requires reauthentication." sparToWaiError (SAML.CustomError (SparBrigError msg)) = Right $ Wai.Error status500 "bad-upstream" msg sparToWaiError (SAML.CustomError (SparBrigErrorWith status msg)) = Right $ Wai.Error status "bad-upstream" msg -- Galley-specific errors @@ -118,7 +120,7 @@ sparToWaiError (SAML.BadSamlResponseInvalidSignature msg) = Right $ Wai.Error st sparToWaiError (SAML.CustomError SparNotFound) = Right $ Wai.Error status404 "not-found" "Could not find IdP." sparToWaiError (SAML.CustomError SparMissingZUsr) = Right $ Wai.Error status400 "client-error" "[header] 'Z-User' required" sparToWaiError (SAML.CustomError SparNotInTeam) = Right $ Wai.Error status403 "no-team-member" "Requesting user is not a team member or not a member of this team." -sparToWaiError (SAML.CustomError SparNotTeamOwner) = Right $ Wai.Error status403 "insufficient-permissions" "You need to be team owner to create an IdP." +sparToWaiError (SAML.CustomError SparNotTeamOwner) = Right $ Wai.Error status403 "insufficient-permissions" "You need to be a team owner." sparToWaiError (SAML.CustomError SparInitLoginWithAuth) = Right $ Wai.Error status403 "login-with-auth" "This end-point is only for login, not binding." sparToWaiError (SAML.CustomError SparInitBindWithoutAuth) = Right $ Wai.Error status403 "bind-without-auth" "This end-point is only for binding, not login." sparToWaiError (SAML.CustomError SparBindUserDisappearedFromBrig) = Right $ Wai.Error status404 "bind-user-disappeared" "Your user appears to have been deleted?" 
diff --git a/services/spar/src/Spar/Intra/Brig.hs b/services/spar/src/Spar/Intra/Brig.hs index d4b5b790060..afaf22d66bb 100644 --- a/services/spar/src/Spar/Intra/Brig.hs +++ b/services/spar/src/Spar/Intra/Brig.hs @@ -17,6 +17,7 @@ import Data.Aeson (FromJSON, eitherDecode') import Data.ByteString.Conversion import Data.Id (Id(Id), UserId, TeamId) import Data.Ix +import Data.Misc (PlainTextPassword) import Data.Range import Data.String.Conversions import Network.HTTP.Types.Method @@ -246,7 +247,6 @@ getUserTeam buid = do usr <- getUser buid pure $ userTeam =<< usr - -- | If user is not in team, throw 'SparNotInTeam'; if user is in team but not owner, throw -- 'SparNotTeamOwner'; otherwise, return. assertIsTeamOwner :: (HasCallStack, MonadSparToBrig m) => UserId -> TeamId -> m () @@ -262,7 +262,7 @@ assertIsTeamOwner buid tid = do -- -- Called by post handler, and by 'authorizeIdP'. getZUsrOwnedTeam :: (HasCallStack, SAML.SP m, MonadSparToBrig m) - => Maybe UserId -> m TeamId + => Maybe UserId -> m TeamId getZUsrOwnedTeam Nothing = throwSpar SparMissingZUsr getZUsrOwnedTeam (Just uid) = do usr <- getUser uid @@ -270,6 +270,23 @@ getZUsrOwnedTeam (Just uid) = do Nothing -> throwSpar SparNotInTeam Just teamid -> teamid <$ assertIsTeamOwner uid teamid +-- | Verify user's password (needed for certain powerful operations). +ensureReAuthorised :: (HasCallStack, MonadSparToBrig m) + => Maybe UserId -> Maybe PlainTextPassword -> m () +ensureReAuthorised Nothing _ = throwSpar SparMissingZUsr +ensureReAuthorised (Just uid) secret = do + resp <- call + $ method GET + . paths ["/i/users", toByteString' uid, "reauthenticate"] + . json (ReAuthUser secret) + if | statusCode resp == 200 + -> pure () + | statusCode resp == 403 + -> throwSpar SparReAuthRequired + | inRange (400, 499) (statusCode resp) + -> throwSpar . SparBrigErrorWith (responseStatus resp) $ "reauthentication failed" + | otherwise + -> throwSpar . SparBrigError . 
cs $ "reauthentication failed with status " <> show (statusCode resp) -- | Get persistent cookie from brig and redirect user past login process. -- diff --git a/services/spar/src/Spar/Scim/Auth.hs b/services/spar/src/Spar/Scim/Auth.hs index e5276f65b8f..f81511c10da 100644 --- a/services/spar/src/Spar/Scim/Auth.hs +++ b/services/spar/src/Spar/Scim/Auth.hs @@ -73,6 +73,7 @@ createScimToken createScimToken zusr CreateScimToken{..} = do let descr = createScimTokenDescr teamid <- Intra.Brig.getZUsrOwnedTeam zusr + Intra.Brig.ensureReAuthorised zusr createScimTokenPassword tokenNumber <- fmap length $ wrapMonadClient $ Data.getScimTokens teamid maxTokens <- asks (maxScimTokens . sparCtxOpts) unless (tokenNumber < maxTokens) $ diff --git a/services/spar/src/Spar/Scim/Swagger.hs b/services/spar/src/Spar/Scim/Swagger.hs index da2bef95af0..9124da7b576 100644 --- a/services/spar/src/Spar/Scim/Swagger.hs +++ b/services/spar/src/Spar/Scim/Swagger.hs @@ -58,6 +58,7 @@ instance ToSchema CreateScimToken where & type_ .~ SwaggerObject & properties .~ [ ("description", textSchema) + , ("password", textSchema) ] & required .~ [ "description" ] diff --git a/services/spar/src/Spar/Scim/Types.hs b/services/spar/src/Spar/Scim/Types.hs index ce2598c8e54..2776d79abab 100644 --- a/services/spar/src/Spar/Scim/Types.hs +++ b/services/spar/src/Spar/Scim/Types.hs @@ -25,10 +25,12 @@ module Spar.Scim.Types where import Imports import Brig.Types.User as Brig -import Control.Lens hiding ((.=), Strict) +import Control.Lens hiding ((.=), Strict, (#)) import Data.Aeson as Aeson import Data.Aeson.Types as Aeson +import Data.Misc (PlainTextPassword) import Data.Id +import Data.Json.Util ((#)) import Servant import Spar.API.Util import Spar.Types @@ -125,18 +127,24 @@ makeLenses ''ValidScimUser -- | Type used for request parameters to 'APIScimTokenCreate'. 
data CreateScimToken = CreateScimToken - { createScimTokenDescr :: Text + { -- | Token description (as memory aid for whoever is creating the token) + createScimTokenDescr :: !Text + -- | User password, which we ask for because creating a token is a "powerful" operation + , createScimTokenPassword :: !(Maybe PlainTextPassword) } deriving (Eq, Show) instance FromJSON CreateScimToken where parseJSON = withObject "CreateScimToken" $ \o -> do createScimTokenDescr <- o .: "description" + createScimTokenPassword <- o .:? "password" pure CreateScimToken{..} +-- Used for integration tests instance ToJSON CreateScimToken where toJSON CreateScimToken{..} = object - [ "description" .= createScimTokenDescr - ] + $ "description" .= createScimTokenDescr + # "password" .= createScimTokenPassword + # [] -- | Type used for the response of 'APIScimTokenCreate'. data CreateScimTokenResponse = CreateScimTokenResponse @@ -144,6 +152,7 @@ data CreateScimTokenResponse = CreateScimTokenResponse , createScimTokenResponseInfo :: ScimTokenInfo } deriving (Eq, Show) +-- Used for integration tests instance FromJSON CreateScimTokenResponse where parseJSON = withObject "CreateScimTokenResponse" $ \o -> do createScimTokenResponseToken <- o .: "token" diff --git a/services/spar/src/Spar/Scim/User.hs b/services/spar/src/Spar/Scim/User.hs index 9cec8927499..3f9cf11600f 100644 --- a/services/spar/src/Spar/Scim/User.hs +++ b/services/spar/src/Spar/Scim/User.hs @@ -187,6 +187,9 @@ validateScimUser' idp richInfoLimit user = do handl <- validateHandle (Scim.userName user) mbName <- mapM validateName (Scim.displayName user) richInfo <- validateRichInfo (Scim.extra user ^. sueRichInfo) + + -- NB: We assume that checking that the user does _not_ exist has already been done before; + -- the hscim library check does a 'get' before a 'create'. 
pure $ ValidScimUser user uref handl mbName richInfo where diff --git a/services/spar/test-integration/Test/Spar/Scim/AuthSpec.hs b/services/spar/test-integration/Test/Spar/Scim/AuthSpec.hs index 671c148646d..4e8366beb13 100644 --- a/services/spar/test-integration/Test/Spar/Scim/AuthSpec.hs +++ b/services/spar/test-integration/Test/Spar/Scim/AuthSpec.hs @@ -9,6 +9,8 @@ import Imports import Bilge import Bilge.Assert import Control.Lens +import Data.Misc (PlainTextPassword(..)) +import Network.Wai.Utilities.Error (label) import Spar.Scim import Spar.Types (ScimTokenInfo(..)) import Util @@ -34,6 +36,11 @@ specCreateToken = describe "POST /auth-tokens" $ do it "respects the token limit" $ testTokenLimit it "requires the team to have an IdP" $ testIdPIsNeeded it "authorizes only team owner" $ testCreateTokenAuthorizesOnlyTeamOwner + it "requires a password" $ testCreateTokenRequiresPassword + -- FUTUREWORK: we should also test that for a password-less user, e.g. for an SSO user, + -- reauthentication is not required. We currently (2019-03-05) can't test that because + -- only team owners with an email address can do SCIM token operations (which is something + -- we should change in the future). -- | Test that token creation is sane: -- @@ -46,7 +53,8 @@ testCreateToken = do (owner, _, _) <- registerTestIdP CreateScimTokenResponse token _ <- createToken owner CreateScimToken - { createScimTokenDescr = "testCreateToken" } + { createScimTokenDescr = "testCreateToken" + , createScimTokenPassword = Just defPassword } -- Try to do @GET /Users@ and check that it succeeds listUsers_ (Just token) Nothing (env ^. teSpar) !!! 
const 200 === statusCode @@ -60,12 +68,15 @@ testTokenLimit = do -- Create two tokens (owner, _, _) <- registerTestIdP _ <- createToken owner CreateScimToken - { createScimTokenDescr = "testTokenLimit / #1" } + { createScimTokenDescr = "testTokenLimit / #1" + , createScimTokenPassword = Just defPassword } _ <- createToken owner CreateScimToken - { createScimTokenDescr = "testTokenLimit / #2" } + { createScimTokenDescr = "testTokenLimit / #2" + , createScimTokenPassword = Just defPassword } -- Try to create the third token and see that it fails createToken_ owner CreateScimToken - { createScimTokenDescr = "testTokenLimit / #3" } + { createScimTokenDescr = "testTokenLimit / #3" + , createScimTokenPassword = Just defPassword } (env ^. teSpar) !!! const 403 === statusCode @@ -81,16 +92,17 @@ testIdPIsNeeded = do -- Creating a token should fail now createToken_ userid - CreateScimToken { createScimTokenDescr = "testIdPIsNeeded" } + CreateScimToken + { createScimTokenDescr = "testIdPIsNeeded" + , createScimTokenPassword = Just defPassword } (env ^. teSpar) !!! const 400 === statusCode -- | Test that a token can only be created as a team owner testCreateTokenAuthorizesOnlyTeamOwner :: TestSpar () -testCreateTokenAuthorizesOnlyTeamOwner = - do +testCreateTokenAuthorizesOnlyTeamOwner = do env <- ask - (_, teamId,_) <- registerTestIdP + (_, teamId, _) <- registerTestIdP teamMemberId <- runHttpT (env ^. teMgr) $ createTeamMember (env ^. teBrig) @@ -99,13 +111,40 @@ testCreateTokenAuthorizesOnlyTeamOwner = (Galley.rolePermissions Galley.RoleMember) createToken_ teamMemberId - (CreateScimToken - { createScimTokenDescr = "testCreateToken" - }) + CreateScimToken + { createScimTokenDescr = "testCreateToken" + , createScimTokenPassword = Just defPassword } (env ^. teSpar) !!! const 403 === statusCode +-- | Test that for a user with a password, token creation requires reauthentication (i.e. the +-- field @"password"@ should be provided). 
+-- +-- Checks both the "password not provided" and "wrong password is provided" cases. +testCreateTokenRequiresPassword :: TestSpar () +testCreateTokenRequiresPassword = do + env <- ask + -- Create a new team + (owner, _, _) <- registerTestIdP + -- Creating a token doesn't work without a password + createToken_ + owner + CreateScimToken + { createScimTokenDescr = "testCreateTokenRequiresPassword" + , createScimTokenPassword = Nothing } + (env ^. teSpar) + !!! do const 403 === statusCode + const "access-denied" === (label . decodeBody') + -- Creating a token doesn't work with a wrong password + createToken_ + owner + CreateScimToken + { createScimTokenDescr = "testCreateTokenRequiresPassword" + , createScimTokenPassword = Just (PlainTextPassword "wrong password") } + (env ^. teSpar) + !!! do const 403 === statusCode + const "access-denied" === (label . decodeBody') ---------------------------------------------------------------------------- -- Token listing @@ -121,9 +160,11 @@ testListTokens = do -- Create two tokens (owner, _, _) <- registerTestIdP _ <- createToken owner CreateScimToken - { createScimTokenDescr = "testListTokens / #1" } + { createScimTokenDescr = "testListTokens / #1" + , createScimTokenPassword = Just defPassword } _ <- createToken owner CreateScimToken - { createScimTokenDescr = "testListTokens / #2" } + { createScimTokenDescr = "testListTokens / #2" + , createScimTokenPassword = Just defPassword } -- Check that the token is on the list list <- scimTokenListTokens <$> listTokens owner liftIO $ map stiDescr list `shouldBe` @@ -146,7 +187,8 @@ testDeletedTokensAreUnusable = do (owner, _, _) <- registerTestIdP CreateScimTokenResponse token tokenInfo <- createToken owner CreateScimToken - { createScimTokenDescr = "testDeletedTokensAreUnusable" } + { createScimTokenDescr = "testDeletedTokensAreUnusable" + , createScimTokenPassword = Just defPassword } -- An operation with the token should succeed listUsers_ (Just token) Nothing (env ^. teSpar) !!! 
const 200 === statusCode @@ -163,7 +205,8 @@ testDeletedTokensAreUnlistable = do (owner, _, _) <- registerTestIdP CreateScimTokenResponse _ tokenInfo <- createToken owner CreateScimToken - { createScimTokenDescr = "testDeletedTokensAreUnlistable" } + { createScimTokenDescr = "testDeletedTokensAreUnlistable" + , createScimTokenPassword = Just defPassword } -- Delete the token deleteToken owner (stiId tokenInfo) -- Check that the token is not on the list diff --git a/services/spar/test/Arbitrary.hs b/services/spar/test/Arbitrary.hs index 8ddf1f033a7..1408d710c9e 100644 --- a/services/spar/test/Arbitrary.hs +++ b/services/spar/test/Arbitrary.hs @@ -34,7 +34,7 @@ instance Arbitrary ScimTokenInfo where <*> arbitrary instance Arbitrary CreateScimToken where - arbitrary = CreateScimToken <$> arbitrary + arbitrary = CreateScimToken <$> arbitrary <*> arbitrary instance Arbitrary CreateScimTokenResponse where arbitrary = CreateScimTokenResponse <$> arbitrary <*> arbitrary From 8ec8b7ce2e5a184233aa9361efa86351c109c134 Mon Sep 17 00:00:00 2001 From: fisx Date: Wed, 6 Mar 2019 18:47:11 +0100 Subject: [PATCH 04/23] Add spar to the arch diagram. (#650) --- docs/developer/architecture/wire-arch-2.png | Bin 65202 -> 73860 bytes docs/developer/architecture/wire-arch-2.xml | 2 +- 2 files changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/developer/architecture/wire-arch-2.png b/docs/developer/architecture/wire-arch-2.png index fec03f2f52a096791525310f0127f25662aa0f5b..c991a43f6317f3feba99a22617979927b5435a47 100644 GIT binary patch literal 73860 zcmeFZbyQaC*EUK@N{2LpbST}afJiqF2qGZeDGiDOA|Xh32uL?bt03Lo2-4jR=YAA? 
zLfn13Q<^!wmT*d~i~KTa?9lGPy1ZF}eJVwhYw|AuAM;vIRxeHoSrqhN`!H2u};ClFg%7k$Y2a>pDb zQFkzyti+U%B>rWG1)fEW1p;49CiYW4iC`gvY(d5L@6g=Y{;@XeAt^!NOWw#V9&Q{% z{;toMd?+=RR#KxW<#M|WQlX@`DzV|F62p`qWX&M(l`$DjF<;drF3i9bV{!E#N#zj^ z>cE5EKpAR3>bMg(yECwbvLtiGPzV)DrW;D6e-_Fg=-c?z_)*>eccE1IT_}H+9cMXn ziPy6CLyCA>WN4bUAGLIFN;)$X?&)j^6@gqvfI5${*wNq0g$fIN;oULf_QO==}jfC^X#f%p!NSa?v-k z`l|T)&>f)pV}1UC=Kp{BH?jprmN9?v|=gEKgz=Zuc+qU6M!lnLBTF?_>49K}u-od<2SFvpcW?%7VusU@WUQR|h=g z#Jv>dcLT6>-TLzPoWE3_ReTIP=acb#6;H%&+-@hgT^}J>5M&PF{R5qpd!4W+YM~3- zYjCMlSp~u^7RiEuo$p6ZF_y*6B$=@OXKUIL&&h4((?;k%JofX$t*{F$3#zx5*E;#i zFNPgP>L7#(up1qiV=;lfenT=6^d6{arRm5@jNAR+(#H8P&!md9ynpfHH#)!$bMKz{?R;B0 zEKF6Zme&9o46^`hb0@*iZB8!{vjdkHZq(YD4HxPZOw}GF&V4Xduso|l&TIKG73dDB zt2EM8H_yLC#*Bz~raOZhod+?cvk>8J5WTLi5ywruFzr{bAnSRiK;df+&l5Ln-JyQGA677=+x&TB~Q& zr#N3=R8Tq^Xf+tPeD*Fb@RoeYE^==v^`!;u^zeWWrSTtU772KipDUIq2G*Ba<9t8s zNsF^>U}p|O9@rAHPC#48!CG1@p8xbR_Ayb1p!Aj6!b@1F7vjP>VB6xc&X&w6oJ+@= zNKG6~yQ)C>v6zjZciPxbdFt%;DADzba^{5*R;FSfYYpQo*x1-Dq)N__leUGLm33Py{4KKDv(F1;M<` zL9$$;dSiKPtQJWDoq^NoLb&w-OT+GpF@)q4*E89g+lwV9c))((O^Z<^S3x*7O{tLM zf?++-839Iq{g|m!s7E+G;nn$*;DnW(#ANUyBX<|K^eK|ydH?;77ewsDMY(?C1#znJ zIuqpxFDSLxD*8blV12W=RJ&_+uBXx@sSETBB_Grw;fu?4!;&sVqMhAVMLv5J8T|Vh zj8$F*$r$Y5vU=lpYkU##YPH4(^ zVvrOBX};BAJC7Z79Gf(Y(P6Co0f%wIu@cy`Or5&%iL)j)-{YR0%=nEjnN%4E#{VApOU!~Bqg>?UBw_bkK!GXOne_4X zr^kjeO0Q8oUlH%S#j0PFEt$RjW9|L7C2jjD^vSAOU@zZT>;~0l1eewCvuP16U*B>a z(ndcTEgufVifGRfgtD`o<^Svv@I~lFZv9dbvg!*qTYlFJx9nBSpSSFf(i77J%_#Z! 
zSgH-EC6R>E8K{f2a+fA;tI@J+bkZSZoCu)%nK@!6dT1Vy2UrdckYD!rY7GV`rpbkBaBcQSe4lG=V zsaX2e00GN!;10#R-cAl-TN}IFg%AF`FWfC#kdG|VuU=p6=cbGB2shf_UhLQuWDHKSdI>5SI07O;o% z6AQ%Wuw45OO5#5lPx;wct-&(ZD1UEBpdhovUT>C3s9$9I zl5($@&@qhk@MB`wB|SA_#)r&yLA%3EzXah>v1ky`zc^JEMPU!FEso-wtp>(ez4vu< z+Gm2^sh5(LBM3@~w%Kn-KT8>f9M4si3>in*J3Wu5rpagnk~T5ilGS_2kC8EScN21i z1JZ&GuBHJ!m-D_oq+sRz$-eZ9RPa6}rpHCIjUDf5k~~EsAD%AKl2O>w0ficN5^xUN zVtd4Xr>z(EZ#-~vmh!_d3?pf^U2+tA^v1eWcJ-#@6U`B1b};T%V!yw;HEZ)^Cw^~X z2I>_b8r`q@4D?YzVw+U`K=0$N^my*EMjsqaH*t1aPg9b&AcZR+e$X-5vA9TATP__Ll@i8zb>}n%TKSj9Lr_M00+#Na7Dp3AcBk4`o<3m?{Zc7 zVwA+z3UbP`^M2rddtP73u^~o%5vwhcrH`W}cZ3IFysbd6eCsb%X>U)pku84c=$0SuGGZK7| z0tCDKhg0BJeMtNU*Up!1&80Gv_3mBZC6rd+%A%@=R*xKC>E09ILB1u$<`t`O<8k+h z9MtCJRLRYC?1o;=mSg@3W`PW~@wv`O=0 zepZqXv%Cu$lrB7`gk)7+shhpX+B*|`k(jMcz)_Mma0W#=BcaT_9T>sKTL=c-+ESHB zqA~37E^rH^{%~76p%5n_VHNNgpB5DFY(4|O_jv;q###=G>l5R{l z5Y=$sRtMK`rRR}}&-SppX~O=znVlJUXpbk}yj;0E|019Zd9hWuhLMs33_NE_tjn1* zRQbx63b&qY>KLxC!Gs%g51w~t`m~;4oh-|xc#J3X zvLVUZ`(w|<)liewj(00ja8S?_#HO^UxA$0{;u;!H z%lAUsHvL_PR3o2V^3f9-u5YG_?!O}e(M?&Hon4rE_z4uEW)J(D(@(YNA~o)NpYKGf z2pK}ylm$2keJd(M3Vg%qvFZNaolo~;%Bn9@p*rWhJ1|~S0!q978^B01cZT-VQa$>N zbKWuI1scJ*3Hhh|=sFVw?73B|^mRb8q)`eXg1Mz$VCKVE+HKBWp~_yN7kN%wqro!@~g6GA#zVz6jyMvCdYdWNCMB>zU53jwK_^HGX{x(=7EjYzJ_;O8-QE=h> zLT{R{08sm=tKd-`%d0jYu2T>8yhh;X@4?=^00zO6$mQa>|HUm z-T=q@q=d^vY?`aF{Be$SUAcApc=F*>c~C3i6x4ztTqnHa`MM26t~(H_m?0+RFmR)}7KS>r-_2KZO1ZZ15V()3143<71NTafe- z?|I~SbzE$R11v9dPC3B&ow3(!m_>&`)2rgXD2Y0)5B9HKczHTgfWM$9hriJQXT&5 z9IYy+`0?~s*N~amG#z7#HwWFTZfPQNfsf_5{}hwB3;{K~oSrT69rmW5c55~~ngJtc zB@=Wo1&utPL789jxiwJ6m_l&OM)k-X%{wS4_dqfi>zgZ4=ml+M0(tL4L$j6_chijZ zuhrh9DdOeI-FqT;dvL%mV)oK~Zk5KjUP@Cv4g4@^XFdD#+&Iubk!qhE$@+Za1ho78{Veh*` zHaJ`AiYNVE4eu!#O!vZ3e3BU0XCEcLKG&0U`)V{nDF02Y5{prx-7!eOIgddomv79H zYxL~lZrl%*T+Bo_scqpvH4zTGA4wQ=q@xoLKvR=N`3{H>IHR@gWCKphepg$0sydc2 zF_NdcIqEQ_<&IvV8*W9bFvxKu|M(jZY+-f3Ff;3AA(sZ0Ok-!dR zF2zU*pZ(B#aw|XMENbf1wS-L>+4PGv;^_CTu{qv7gS*R!&fTCHlR-_>`YuS_Rq*X0 zXp$4m;|c^Xl`Ui*!np1Y;>yA{eH4 
zC9zTpU`Pt8+eRBKU}%yem!}U!5niol#tG@9Vjzei4(y&TY<1oPg`qHZffy@ngBH>%OkK zm>nfgLbf97$zL>sdHoa>Ha+2ykN*Q@_uoo{QD?@-6vyTk#6`vFR#pEJM z!XeKlL>Fx|TaR4kHXpdmW!9IjO;vrl$G0TS1y)+*Z*`cR%byro4E)}2YVrLpJ@OT_ zy7*hl#pikb*FyvRBP;QfyDdJuq2ck(_gR_liohDt25}j(e(=E+fV2&S+)Pb}!~Pq>6Ej8FsPIk)C9^)PAlMMpul<#!*u3d!ywRb3-ML>3z&k1#i)tb>ecHNh#|XSThC$!b&}OMqQ>T z$w0elSB}g%q1S>4j1%~1iq6GbpuLN8LiVh@-jV&gw_avE8UEXlqWW=3UQM&~#K1ds zb2wW-J)YO}8I7x`(^z9mCMr)Q0s2rIJ(u_i0s5>DY2pMp;9> zCwt`r@NIVi>uZdHeX!0wH#R=L(}rWa!J$%xVlcouSoG4UbRjeq-7T5X7rY9in{ueG zKPqxY=6T~)_FV|;^f`4=AC2vqv@eEq&9=JMBk!#J7C2;1D(p7@XyIrBumi1MYOXTRM3M(zx%Taf2U4fayO#GQyjrQ^t zh2d05{L*&jych@(7@h@fotdBmCVqK<^zc#a@HArnkGchS`q1>G)=2dmJx3098V)Sr zIPxP&wbp^!JcGqo@Xj6G-KcEPSr~$DsBHgPBiF+^VGqoFyz6ZAT(XYSNVB%nh>2=8k zU!?G7vSv%f)!q&a4AE)Nk!;O46&nrezlUQ#P;|fu=^dp0bB08qF1QTL^eEf#nX9zh z^Hj5x#%Asq3AD=VHn#79b%_CZI&EPjrxzZoyUO*qy{0wzbeeDc-mo|Tdv=AEshkAa zz#gJy)#e%8-kXJ})k_Ge|oo|QkXQGYv1Bt%VC zhBeQ;-&S|iIgRzBzJ zLiBMbG``=O3VJ1E%`$skjKW0NNZ9t8CEv5n3`|Wc`MB}Wxc_)Ytf-Ou8t&GQj~{>W zCLNu`K34OJ4*fp>7AqC1<dZ(zfsq0L_2sd zLaDyyb?kJ><9B6FYSBBC0OngIJ)dqc)5Z+nb^hZ=i_@D_w+&{S+lCLzxgUdrXR%vRuwFHRSdt;lI#85rXr=cf;k^^nC~RXGg7-L|>jNX0x0ZiIQU>JR zxKoqK`f#f4ZOqiY1OVOkA~eYtj=s;h?je8a0J43qVMhgEAdvgiEpU}oviiYqohi#D zs}OR-Wvxhvy$38TOJRj;u!%Xua%);zxA6K(D;-b2{8McY4jNILAqe1rr3e{&s7RQsZ9u9%SRQB>-SbRLmsIcbg(XaL z1UT=!hX&D{X$3ina`MBku^N%pl{I5*6J%0#(q=(NrF1kS0mGu*9xD70qRHZt4tD3{ z&ok-WZXG6CiF~fjHZz#3%rjC>)6uiMJai?6Eem>(9 zQPekB_vGrELpvTd;x}jSY}M4tgbOJ5wN7Rnv~>p~0dsfGRC=`Y)djb!TPHVafi{uv z>aaVDS4pdvEc1(QD=(u1c?93xN;>l}Uy8bLT}tg)ue;i`R05i`l&xqP{>}{O?^zq%z{TDla*Pl-EHT2YFjt@ zmTdk4qX#uG$t>+LP=+;a6CQs#pe$I!&r!Y{L#)MZ#+Xyxj!$emUJI_-#-I|f46(4i zryBT$9O|enQ zb)An_?>+E5PHAeGe2%HCmZnhk6?UPu*{Cx2nKprM8Gqk1n{wg#Mh>~SbZ+7dHqANz zKyUdcgcQSlQSt)iF;x-0+LOe`i%nJZTRUcybPEJk?bYNP%!dmT>xN~S5)FS`4IFG_ zaF-yjS((JRn}YVKgx!mtq@wf2R8Z8!#i>Kl^0N$E0phxEWMhb9OQ&Z-o4!RgbZGnm zV=*MFY?lV!7;GWF$VIR6qINCU@k@T8q~9b`8GHksjJmYY-Hu0~Iov5*y48MLWqzbW z8xPKZNy^Av|9D3uii3TVJ&S=ptSgQ}XTwm!z$k 
zixRwEz=h6i1jTX}zJ4j*pKURJqh#YpfR8@@5!zt3GxCJ@NTPWOTfcKS+cVU>Gkoay z0(WbK=RTgINumNT@heG^d{ne8Z_N>>`$+4p2U$cx!1}Og2$#Z+A z?@C0zw8|Ul8(;PK<&CV{lCBxDvMZW7XNF$)K5gS|a(LuAt=^6HV}x56y`f)D@0X_5 zAK&Pi$yjIUy7ZpwN#AmgBSmESlVQO;gAuj5*WS&}48HtrY>apnY&AAoWu{>cV|HA_ z7T?Gk_<6+m#{uvr?=x!Iv`axufLGVx>rF15~R-fVR$DAQTzhk-J4FBgtuAkHLFVZK31~n_)juudKx+9-2 zl+)nmNRMx5tD?R*HsfgSkPs#qHKyEV?R=4%J+VK)cP{m8pD6W7q`*%5sLuM_hbJ|N z%Z}HRe3to(@ARUY|3l{ceujyR(bsSL9Z9)Pt}_36MoE#iqR^7F^Cc;K z1ihX0z*<8V?LsK!qz%K;qF&z@OZ{LKRhAl0@bv|SeFE7J9^*>_!2 z_U+Wd`EMBQ)vpg}jTP(hIyTSbhkT8l^cM3ogVQt`O$L2oje6UW-M_Q|5S1aa{?M5N zf6t@&>Bp<5ZxWG6I8~77OLW9v^~X{9WvSWd;Aimq^Q|b2YNC)7qJFuV;dXZCrj#OR zxktj+$&X_AVZ2dgZCS&vIvXD89hXCC zZxzw_Hr?Gv>A!;cGxBs*=Ib=JhvY91_&;X~Iqe+A{8jm!iDSe*iz7>AmQtdTTZCg+ zN8DvOt?=WA%?M%Tqu}j#pj-R%QMh_g^xQFlKjeg)qja2!kP%e&KgE&}gz9q2}}$&awqz4)Fv$-wXogfZ7GsYiHoy&`iKW<$rW!D7cZ zitX(F#K%StKT~b?guw#_RkJq)?>+gG5O(d#AJJ4FnP@=Nq0B}Y%p&(3Wklu_H&u`@ zc`SLbH84l&U2WhmWvHJ6i*G}bF?0oRG{9|c^>gYo!SUpQ+fEipS38R$M|AGlF-TMA zwAn(<<3(;c0sxgSnXIfd%8sfC_p&!#iSBXR1|D?(4_}dwzLd5c4UiY9&@fzraeBWy zMcPnwUy^>P>YE&6E|j!xN+(8_t3S@dVJ((+ifm86QbRp+=g!E44jFUu5dXQ1Cqs5+ zZR#gDV>>R*G~#Drc2oCXMi{3KF?G1v7NhA5Uo6_r{HGl}YZlqo)AI%;_6j5pCE*IQ zrvzlwTdbVJ=%dl2Rxz*X)U#vr;+HWTE?e5Yy0jKo%`*%IF9ea0lqsb=Q zX;KriVA^c%d#&$kn%10QK(qRVbMYL0;H@tVHtGAocw^v=u!d!Qi!PLDO4089fu`%m z8ecT11&a-&?V+(juy`D_VeJq~%I5UpO4EL%&P0i_8aYLY!7cJ*c5fp2xyyCpxWfmg zFV}pVIr@w|=CY!ZGa-%3i`N6*M+j$1DS8w5y_!atYxE@^PK{eDT4cEV0$`}c6G>2Bil{Os9F^hVF6dOripQjCFgdMuvEgszE|3eisxGKJy7(cX!d8uGa20(1dhtv(=-NrIlrVN|jD)(3LE`IHh?*MRuY&7Sz+hOM_4ayf1?NJk-?cQtxTX5e7+pQV>bk$xAW?5j{ z97x{D|M*~X0qxzCTlJT5Vc&E>nZH-F zLQ7M@L0#f{S9qdPGf{1GN;sGZl<(8db`i%jbRJF&`ec6jhtF&%JqErimNI8X${7Pg zT?+XZ=`cS>YWbb=+`)*|#+j&d@$70c-8~!MTs8HeNt4N`Y z87|A%BNZ^|1?a`_sE1f|#a|V#?p%rda66M1g%A&%~)i z=Ez7fa1h~n7nBM{foM!i_S19+Q_Jle>k)gM!prh`C;Dca_5yG}oH7U8?Is-u0HK!x zq^2qZ?WvdL8@&r4;cw&3AcV~`E|1xZfLD+rf{TXDFr{tfg@iK+9>CA%%->tZRj18Kx3S}&4i*rH+`QJx`nIIu?`6z48 zeY?!yUG>71M+pctzo92{MgZUosyw+@wF#HWSlJDJ{qbev6H!mu?u*hB^r!sDi*%L) 
z(iHD5Wiveo<5sna;eM&~;3VYHNQ8$qLRgI#rQEG4_QnYlU9{5(TN*I8{z<1WIv(%V z5Kj&$ZZIUta9h*NK`_>kuVm{DnuCh3`Q1-0d27D|vK^{$-R7sUrLulSr^+&N7uKK2 zHr4Ztt3Xf0B57J`DW*O*><{@TJ7bfbwa%-SWN?G5tvLiESSdlJPRU+gbox)Z~3bgfsrroKLD?a zcx0TA)2q|bp~fITsk7rE?)80jXm;SZ2@jcvs6B33H~yce|DJ(AF0D<+Secmt25=Zs zE;Bt~O4?;fkl?A}#;%+b8Oql-h8BVYMicKnZZ`ishc8>b(p0gCo^Hh(_wIik{@O&h z9lJcyCBNgNdzM>S{oD>vYjxo8Df32SBlKz-7ESB&ob<%zV$jj#(uv^kKTl^6~7e{i?d{1 zGJ1$s4gl3D+gi4HQWL>F(>xFO&Yb(e(E`|Z-g&r;6Zp>OwAS=HTCuzna;^n#QI%ln zGp1o}kzedHVfz3oQXIsNB0a}@&P^a!`p;a5EJQHB3`-5759%HM*aKZs5|i|#V^pL& zZ1hl)$miWTBN!2#Vs}q<-n&#VkFvv^CiFI-+u*ZZX5skpzHcn5Kje-BcZ(;s;eF!) zGsB{I2Ny857_G3nq&0F57t5V|$JO4w1Jq^ZV_Jbzo1hG;Fd-NRwqzEhGrWLvR?bh< zRMy>}FMSrY2rN($$}f-34GKCyvXd@?#_wv=Tnw-Zx2&HySZtZM->?J!@#d`0C2J0( zbVB_7^7qd-uX78uiVF+ufMEhvK?RT97WLuz&ff@PcsclKadUF`tqO)%BsvZiTPe@K zaBc1Ozr#kwO)RH%ADm$b*m1UqfJG=QNeYm!?QeuXS+|=lgRpcILaa(?v5JNulzvg_ zqKSx9Bpo;{ODnYVx`e^wd+AKi1Wo9998!@1@F6n>r9eT(nZ`&C#r{l%(#N7za!Cw^ zN2bM&Qy6xO9JnDTE5=pft!ZMUs-d$NC8Bv+I`wVLy%do6-_SKwx3gQ1Aa5+%H0|jx z1e?*Nl0$OojaDxUv%@8M_h3Nc10`iQk?q*YE2Lmgg0kb!Duj6_)}9Kz{cZXOz_Gas zydWIi83P%bIq74NTgD^y#IhctlC|)Zc@xR4IRT9mifLschnI7S=sd3`UOT$XRjm$;$FpXsG(Vmo-X zy;EWwUJ;k|LC_?)ytT9r|LW42xi5-B@siKIX2RU`yKaI{hZ)B%;#2m6l?HYSWF-6vxw5=UYu zuO|s)8Pp1`cK+COt{3X*eE4$j#{KfJ=DohpKcv)-No4+n22h7esyQfXkj|(@P0UR` zip3Vb?T{f|lH!tbdg}7j8Jx#6?U91`IzfvdikQEY-^fUMRr9)JyO-Zv7Mmiqe2w1S z{*zl5ecZ${qC%I1fSxO0XPUXetkc=f&gi{Q2Kv*DNNt&u@tuv~f|4tG6)i-51`jMP zk0liJf)8#OKMN-J)q~{&2s=*}nIIOgatnG5!;Rz5t5sEzwniWxG=I>s8^mpJ0wJtX2Dq-xTG=iaJn^YdrH-OR90x^sft`kTZ8m-%4G=7OPj+d16Q%SFKz@L zZ@O{a>K^Nl{#3~J_Hdlgc-{~t{5579?^kO4>y=Y0I#0hxj&bH?JOoZY8-JvI+7G<_ z)LAykT6|xO3F&0!PpKMz#HB!grW}ypB^7*?f7l*-c_3 zbuIu>kHgtn0C%T!o5u8^+izt;&zICZ{84xiRYC#bgH~*lG)BT;4wTF_W*9!Qr0*8k zrp;f!_Vh!XW}VBrIUg``Ob6$`iW$H2$GZkTWlVFejmENEwx>fhze(F=-vTbQ>P>2f zl-#R5(E1Elr*K9^?Ca=ivVR*l^80&dk=^N18*~08!zp zfyKD_AoHB(tTuBq(Sj>B&b`c;m6l=#B$GCXvz8?&a(EttOVu6Q<;w2xyu^a$bBDW= zw)3$qHnGiiG2?Q#dh@~D#P$beLp@eocOvDyRm`(qG|&@kq!W2u_yI;hsy1>R*SVXA 
z;xt*4%4b}U-pImr*F7DLsVDU|nCerwam`zR*X)Vj;)UVA?4_sKk1@WsiKUi5GF~#eOhF$1-hhRpQi z*sJLxt;a}wN;iGv2VA#a_XOvPnHt=QtG?Irnu%Yy?wVKFQRS+?(%rV`1Qx4o=>rSel@_6UjmD0!Y!MtXTYDY82?)o=8LlO}0I zx(Xf=7q^|tYh+H2q|}g%P?}U~*OsW$JTFFYJhqL|*kiZyjmoEb$HgIqgSz|gjulWa zb7dQaH*{DoH9iem*@TQ{?t|%SvQmusbGa)o5J~wxJzP=g9cJO8kx^-SEdTNM`D;wh zOXzKKev>*=ndZh4s&scn`MTz*s(W`jZeCsq%pMxLd;!c#ce-pIr)91K7nyG)!W_J3 z6)YrX5OsFa!dz|VLx+?7^tM^$55M1TB9;WE9S`xU)uY;IIm0A6HAm9f9AQ5to++u! z=tQMt*0}GBql7{Z-#!Cb^S`S5?r^IAzkj4sk{P9Kl``TOMON7>Bpf4q9DAnX813wp znUTFWab#7=ly<-=XgAykH-_2 zxSugp@eDci5tcQ%Nl#YqrahK#H}QslX(Cp+YhMOUdx^v9b(t=1ndJVq41GCtZK(2x zk!xjQ5tu$T2`*vYiBpZGDG^Z=lh7*owiN2h39LCVmBLkY$oXy!q~Go*Gm(pVdg4Sy zon^Rk+$S!nnCN89D{+Ey_35RLQtjuze1&?18>p6f&PUIHexNS<8ibnot_<~T!>8lc z1g+TX9h7lcqihs!+2_r*{(52pPjyO#aGCyxDd0=I?S((rks!)XbW|y?<+0(?^m_{H za84!V=PtEzxL=Wg5h$^IaPOy~Msqx*F3p|E zQpUzfI;b)+n+`$hs(=Q8yga|ALYh-1(48OUT%`~c!R3I4`G*CIOfFX8(sp$!-L<7F zj)Et(nI&T$2bJE4;|aN*ZAI4{MKgA})hrC+^rz8T2j;H$5co3}0Jchx)KO_CXIA!# zN-{8uunzE=t!_#5S)2TD-=T?l+^b^gDVxJ??6C(b%pS}lS?L=_ygL}4pn09Hr#F`{ zSvxsIa{tl_MDNWj15YBA7R}%H{)*JhxY-FVe`1u09DKy4X&WYcEQ0kBlEg3Y-c3c)a}S zdI1{Nvgt^!9aIBcqj~5VyWz^dc<)7g=XLkw7Y}&kO^isUaV?3~zn4u&@Feg$D%JwM z(F*DpIpWTpiBx+TTWack`ct{j#%-j(-^joLxC(`ybHi(MFcr1zkh$GvgsLO`X(IwPz;A3IWpe?D&RDe&<8TFBg$ zBia~5=S2F(9zC2wha>+$d&krN)`idQ^ZaddLg39S=1;nSjc!&et{7|5oeX< zyrPB7x$)ZWd@`{{x6X$Yj;%B9zV;y~fJ`_qjZH}$ISLV+U}AZE-zM#Ymp!hVB)P&@ zak@;iPM5djnp;rc*KRnTMkWHuU2|_HT>HKkRyl>mrpuRh^h!)bT^$0x+f~wP+u`Cf zcizZ!o7hl93o-VO9%N+}k7tM6(r6d8+x*fQlMdS=+(*MwL#a-WL*$o#%jkqLQ0UC1 zMJ#*v8)a8kv-|HBMy3p z!t8HM{IXtxwo@Fi%c>AbSBIghcQfEu9lwt@&O!%|>+YlO8fG~$R|vk)voHjN$uXog z{5f|0Mn7=mL_w3*6XZ~ZS`G>k>W3{pJJt`c^xVF9*IeI=tJdXNoY@En4<|mjb$58t zOk86+qo0J|^fh6hhJ-aez^|zPifWB^Yq#n-yob4j59(5@UP|4!=9r5puCCYbOKu;3 zvTKB|bSLxI&V;O`?7Jf9dzt|r3bmZvcMxspqb{%MX9La_IwOgk7ifnOKnRWi5Yv0ioL$8i2l$VW=i!t-jPwbfPW&%Z#ACx=qKqE^vt?$Y!M~H33?=L{AIRYl_ zEy1Pb=o+bt=0VlRq6Z$^kK3=6-}j|?^9M3rs<1d)-CWZ!E6pYimpwBZjCT0#W2L#t 
zJFYK6x3;nSoJx|U?fJJ)3nU+M&CC%g0$06#6!>EDYJ4SE8fkHWHeGI4HU#xYF}LT% z4uhplkI`blo9)lJdWr-@j$Ult{H_S5M!+j(BuUj>nBjs>))AaXUd_+>b^^4RPY=Tc zfP#-$AnjCx?S6hOvg&*BFG$;~A*7yCHNltzUr?Pn_XR*I@z!O(pfc>jV(9GeHQ#!Q z@KAE`q$s9`Bazo=WpY_TYE6>_t$uAs0g!Phd9zg6ZFsi{Hb$xSE%%{EH*!zdvQ9Izcj^b|25b;IE zMf}#(=IVBq(s!Dl5ZNptjy>T>VJJ1mrMmv-I$1pN>s!wTt0A1n9;ZV{=egxv`zCn8_=pOtMaY6b19xN;5gzx1IoiA8`$}Iy98vgX0R%kD_b*@ZUAyWDiTTt8 z%gFg-u96qJfGIWsyoed#qdse{vKL(Uf~?;WvSiyxY)+$wQ-_Rz9N(QYhgG#u*B-RZ zA}_)!pW>~v2ttYOLF@SEbs6TNK{SX2pEz`J5&%}oFh|Dc-B`0!x3lu-uIee~xIhmW zMSx!h%lb2hz157QM<-=U3^J)w)>S=DwStME%Z`)o*({7Ef83VOr%?qhC5tjTR%qHI z=QUUsBGI+bHWW(8l3dD(H$|WS4lfbu_0RN%b_sOp-*wYUENWn5V-CG5Rr(rO!W($3 zO=t4d`w80h(xMLg$QdA<+&C)c?yTh~Ddf0egcJdv=o0nK^EeJNKd6;qT07cKjL9Jvhn}2T_ zhvt1y@l$&R0{(!?uPXW~2~;9Z!}+vJ^TvZXkEzVE0%~f$whkhaB=C!HXUGhDFR1a7 zPA2f&hS9AVYspVKY;!QY6N>3vRrY6f@-^I^Isb&#+yYcd5fZhpNw<~1+2=edI;gC6 z|F-%50B#Y1d-mrY;xT9KceR8iwmEhJ7@)bYID0lwb8;dlNEg!HYvyZ!dG~v;K=a~c zG!ObS(x+H)P)Z+Fco(_1oz26seS_Z=O5ZS(tB@WIvN?$#o+81h^2*JKfMKx7t_y!` z@)S;~ZQCZZ=TOdrZe*U4gdLuzzsOjs0KHKXidAJ{iV}+vM5)*w904R%{kOB>2VrsB z4um(_zcVXcWDzK(PoqA^0;M!l3=l2h>Xo$uyPKFQcI7(oD|K32z}vc+|GG(~*W-DSGN9mP zs@TgX&4&-2D#Q`bAO#8DB~I#}ZS}wkDjG9bU3nLeJ}OkG&BE>VJ80>-Ojaf;ftBG1 zW*-fl#kQ8YwEY-U)k(e_x$NoT-{dV>&mPm>SHL8Id2T=W8G1g)(rYhW*o)EIl!RJ} zC^!V8Bj16rk-P3aLBqjKvC3V3u^edr@6}aJ;|LesziD{Bq7fD1w>gdKoFF%5PO*nR zDf}u^8m@Ts6Lbg@EMttm;0ulNP^B62+2tou#To93VQ5}eeh}pimU$}$PQa+!^<$;O zZOK8zUT?4qS2T6c0)o3~U>n9<*|X4l*ey1ejpfn-8m^qUGcS9M4jswkY~HlV%qG{< zCSkLMhTp85ypM)ayUzUWj#i6!X_=MmKF(G&ZEDS*5ZL;+$m0c<@1A~IQ`JBmYWq}O?Xh@&@~O0 zyC2Dja1BpqrQp?7X+XmNqRN&*V=BFV?7;QE(U1yGDG~{1M_Wi zN0;diQSxA#wenp^hi4u_h~otPqK_b4-xt`?o=AC;%s-aOc_U~K zPv@U7IqVii^lVzs_*^d9XrY|Qh0frPUWF*@V}V@Y3EYDl zcX!h|4zLs@^_+2?hWSDSYpL}?<{y05UkoII61Lz{ooThuysVNFx>KIKBSZ>-A|&PxJwbZ^O!0=n;8%|8Gbf4lyEHATMn$x6)t;0U~<9H@ZL* zI&KOC2fUz~G6d#kh}P%(4y5;#XdOI26v!hDn9~L?z(b$|HIQi>BmhNU*FuM6u;N`e z#6iPhdcp38o-v9^P#(}N`_TVF?J??pqdwx~_R3kF;F!YE1`fnrd^K-UKTWR~iRN`j 
zUEP)bPOBPP6pXr(BnUtV)qkpQWdMaH-y0~a4jV{olpY;+)WiQ4&3Z%&%^r7r6S>&;~U=;I@g1aisO-Z{dUqFG$N9X zoAdrE?l@R#3DZ!A-GFrGf(ZJGgVz4iNJJM*yAtXTJ}ioZc<24v+>ZU{r9VD#^AyqUkzBsI?S8#>IuEB(6G;K)nT~04Cniv@ zEx?KLb*=!JwQr-a`U~rGsI^ojhS04(D1&GnIA#Ewbjgd^*STPv6W2^RS&$G}$Jid+DUi7?lu->Pga z);vU$eeJCAYCt77S5?Q0O^D`m)KnaSY~Lp~e0-&0cdxc_iJ6?+<8Y+7DFGeUvy90r z_gMR-etx1|I-=wdj8~Por5{j{bn$|S7|b(#!@&G*G9dboApqpMjKy+Av4MrK9+4Hw z8iKM`>3a5=24C9r=^x1pCzr62-QZ`%%y(>bs2k8?uC6-uQs-@2*jFg}5&<_Kg`q{~ z2-lSDktsW~_gSuY`|if_EfrhpxS2jEJ@{f{ZOKj~T&0r;V%v%|qfkS#`N)v=)NXZt z;y_KVbAFU?_;M3fGKiKmXF8$c*QzQu8LrmAYl;~diVTvViQf%^w0>U|nmHIrO!T#! z0#Ah}Me7elxb+yM317WZw=ZO=dtm`)$4`6et*jm`jt~~kD_D$5c5&TUEE{PBRU^a; zkom6xq7teMCezBJQ&^ff_gvAyrku*+#&?ioNk=7}b;#f*_W_T_D{v=$lb$c65g&sVU@}PmJcpo)e$aPFn>AEI zqQ0C=RysNb;p|k0b6hyDyhmdPfN%oMmy+hIU#|kTIaS@Cj~;7=Fk9e zul)#=+;0S!L*5V{Y7Yu!-M8@(PLo_;DtRQiEj9O+wcq_b?)Jy^!^$@^Z07{cxo5qs zw>8_n5L*`v3JR< z9oD@;Pt^7~#Q94A9na%)EFWR&f9;|VDhV(Mti+zhNtP>)`aKI_CBpg(WxVsHx?u%5|JuTx z%xP37az!HO{G-OI;JEb_gXBm7-YNS2UWK58pFou-A5G}Vvnw2-|*c}tTkX91gcs&eSf-R!D#N2IYy(v;Y4M-UShFy8b^^^pP3T8W5mr1+@y&J1 z0$f6O(KBhYykAs1Z3AoNJCHk}&(JU}cR-TiyBw)*d@}99SVx@o{D*J<0$d92GaY?m zNw0`S>Z;XnV|A@fRc-}Ow8zwdxQdjVM6p|InD$P*^0{cCSEDvEnv((aolNUHm?wL< z-a!qfj8!f@j;y~+h_t^1kvc@IN0=4GkU}3`+*ZfhOyA?`CYsrwC;RjSDw@wIH$y)C zSY_1md0@`kO|8a;l!}yzwZ}is3B)k}Ha+B_RC+*HwS4aUUtM7RT^w+MD{BI#{}^ZF z8@u%=>f-zAz7X_0Ea&XEz;g*T5jJ*+G1}j3Ev6JA(Sx#-{=7?WolY@=?5=slqC?VQ zcoiCE5ngQ}`)Sh! 
z+2C>n|E&iTjXs~KQWBh>wvl=OqVw&L<)o9XFADXLwM= zZc9;0i%^*?TcsUuO%eaHyq~QBb&Kvb$6u zplPu!et19ChMf=A(Iq^?h^kk;p+~}O2JLFmOPN0cBc8Gkd{XejdmOKf@oq)&&@NLe zkP+^UY1hR>lVjG*xz6~Xxbs1FbU_m-0O%wOKgk?=x6jHWeo>L<_gm&OHcWTkrj0b_ z+};??TjelcvM4xS>`dx4B61tu{nhco`(uqixHG z@2XbFf55zM1xle^%2y(GBOmxkS-wwEr^EK6k<8-ugM#F$qF;QUWXnua_kL}rFi#V1VZ0TUmxjT*t#@} z*IHaAW~Q*(OczmQ4Qy(`OiqL9e_`a{QQpa(uZZ?8)<G}4j)}bNlhkvN>Z44;*-ROfsVRE%<(V*C4NIub+RNLOSEPy)M@G`D zSPF^ni;LAEL`kuMl8of>`&QnwSXX4sdDwnf+{tgdQ?peO`*nIf?BDZXd&cgqB)E)l z&0@P(;`hD&{|mD zG*ReBgIhrjvrt^-5%1d2fK*z+|G7I-z?fnsOx)TP8rT&CyUY&pM_~u`LEkO9pUX0wst4w6i(h@@^}!g!Gp^4H@6d~K$0X56G>01 zPKH@hY)1vr;sHbdj0)gzAhreGll6*U%!L;npDfkQ>pfw&di3m{()@`x{(R^kznaPs z2A(AC-X(*-5B)o~`Pbis+3fJW>67l-6qqA!Z@0TY|Lc!iQo8>^Gs$=VU%nO@kvs68|Lb3W z7ebkT2~+?38;0?aBV}@mC(;o!yWjpEJ^$}tCEZ|0@%N}O-K*+X`-O+|F)`;XP3z5H zwcq*Zs94yL$7s#SM&bG}sjlD5uyA=!GHC841{kBfv(~9=HMAtTK4R{N4Q%%_TCr!x zZ#iE2dXe7^ezC#BZ;Gs+yl{t$(QZ&G;wJgN-tj{aMc36p2LF`g)MX1VncVpgS#o(q literal 65202 zcmeFZWmJ`0)HVzVf*=h7(n@!iq$o%uNT-`_X^BlLAP6YkAPCY(r+`O7VhfvY5Tv^s z-nBi#IZupdd}Dm?pLdMspYFZaea|)5jB8$VF5ajr%VJ~R#Y93v!j^mTNDT=IwG{~o z#TDZw_++#~mIDci21)Lb)U%g{YpJ&;G&E{2wO-E_n&!q{XcU`dYT`W037B*34C6JA z@Qb?q4dU0TABR0KQJv6mCY9n@Pq7ctvU@b(0gp47Hgo5XXg!=D);^Fy6 z;uB$gA5Xyi@4v^VVG6?g_lkd>6L4Qrl}#w#gzG;%@%noyZocb(|A$}TQ6L9EzSdF% z(*C#KV}ySAZ$&}!pTWQ*!ul~!di&4!lEfI^iw~x`vt%QSAF5m4xPA*p6p~aa&13}p zYP&H?GbYrh2|9HyR->>CpKDFvu^~jMqxyCw3R5&#$+N63c>OG{pbS!$f3!xAN5iCv z;*$UBT95pw7@@T2*N;?10gt4Y{3L(}%EgJKe$~VvU!ml)gR;&3|wBzb;pm z_RAo3`ky2~ZSB3529GHo?7-<*{lWDPjI&Aa{!h#QcLV=>0{?45|Ia(I_)wGYWj3z( zk@232N!wdQRE)T|d@y~iym!cRd7c@@!ILEup0DU-eVpQZaV)dW8m%&T7W+}`MS=3A z{A>x!+Z#G#WC2s1cNz1l6HZNZQp)!p47Z2h8!xd4$ySK3+D}pcb#x~QC#<6SiGi}i>NKy(O8BWG)Zq*=iTk$GrgP7Thh_<&ato^U!+ax~CC(^P1U*T*EAGu29 z9@=$$RwY10Z2wQX*Pj@_g_o1QIe3q9Eh3XYh^hn`>h#y29EBiBaew?LgUR2MR4t_2 zdWLnKyxwx#ra!uQy_*a=6x3FE6qU;BPiF3;(2dDrUjIJ>74Zu{<=B5d2?8(q^{nnhw@jl-1rBV!^ppAZfi({^!ml@k$MgmqR#dY(`}JYXtVUa2ZwMns)-OZb 
zV=a&D8V-dZy*Yp*U;muRulf9N+5t!S7as5PzfhgKx!V(jnB4l61QYv}AHWpTL`T;u zhYisxUa?$j1_>zUU%*9`#y?h5w_|S!{;jZKr*mO{*hnz_WZ+Y`*X9?CI5r)~l<(y^ zirzzw>$9e@Me&C6*j}25UCDt)NGjpeKqlr{q0{ivQ5MEpT$2f$+2(vR0Xo4&D+0yef4upRBNrU zQu#tmst?##0=;_Tw;oOFNtt}Ljww~LLHDf&B)dM~=GA%QaUnN~lYQRhru?HzedOu*x|AK^=3>hU#I8>q|iYY>HODuZyt8L@?NtU9Hi0iO9 zSO@+QkcQqr{qk&4#@FTcT|TChtU%uqf#62yX;P_4d(NJ~r)x0fKMTyUNoZ&Q_1atl zXfra*D4K7#S>C?dZ!IO2#&^v8_jc+Q*|eA_LT*pVwm5%eRV$iz-znFIN>sXe;}Cw)L*;& z@@yez+LHh8chJzG*rwm^7$5lq!(Y5ZVvV41N#YL5NJ#d{8vOH`cVKrCbPgu0SCZi| zg&=+H4vVHh^@VdMH*)`*r!tJ>=zoGDa6!gfpP_wn)o{x{ZxurI)^+4_-_YOPsNYOVo!DmtHUMfCrg#)-)zfYsHW4MqqzG0Um!hpd_D`W{3F03#xuKCc&wJ;=uljT?p9SG zghfw}Io7X<1syDZ8Q^vm*m@~}VzaQXoqh3rbtF^@7ZYx8mx6CuS-7bo4QCkN}~Q0B6FX`_l1ejCjNPoie^h*F-& zd{o{pQ_0DzWzW(<=3F@qx$`G`2hgj&&KQ~b8VIu~`C{IRK3R09yi}geenw=q(4@G2 zLxgqRuKFFzQPE_TGv$>vJjXLWH6`_F`%?k?m7{hVI`Y|#@fD2YyN}xmI-e|m;#|bl zD`=Sw=rUxNQSBb(kiDwFzv9ureYlF-RIfBYMr(XOUK{#_kWra$wYc8CU1D6H{tk(* zn|*0nVB-2w(zS*$!hrDqI%|xNG+gc*6F~)aYDDm@fs;Wy4Y(~!*6JM6cU7yd$-`rCftt{<%vH~XS&)e-j+Xr z4$!T80)Gx(lwQ@`KEFZpMTf?=b*bp8<$h(CD80Hl7q3b^6ZzGNH35*c+m)_;r0ejyT0M#eEP{@;gQ&&YZ2-MS7p?tCReH1})150l zEF$8z2MOd)ns0VMF)I?XCL&tpAdMTiA*UHtnS7(Leuh-4^yAb|(JRRx(gDfwzC^(O zn5`^EOl)$#IFq>vIV4T0bR>0uor9H0QO)UleDiblIcIr9~uzk{o2Aq>B7H~dH z^V+CoCyStbP_-i7Grn)EZ;YnRi|Hft8tk zO}WZDNT{N~DxkP2BgMBQj=th{@AgPI72Iub+G)`?s zMyD|D#9U)S*X|?MvBFLla?I3mpLB&yUtYNLntp#*5zi=M#5$ro&!q57m7H=z>RY${CNng1Qn{%(VjKP zvvJ%n8HC7EIKC3z`F_8guX5d{Yfo=HeBmJoob(_YTrsz+1T zokze`CytJ4Da;UN-;SrUl(7zvV{rUYZf(8Lq_NEpkn}@Uez4BR>wDpC6E|E!5QbIt zMb|bfT!4p=LGkm@Mg>|dS-0Z2<_`e#;~A~sU<|W*A!?%T{83q@ZXsqtOFmxr?V=K6PSjAE zW@jz@n7k}`RGIL*X5zbvJzE-`EW)i$dKjTPH+5TQV{jiE#f$C_s27Zgj-mQ1qP`UB zdKDQ)tLlNGff0Hu3N``pl&jny(sTcuF6^8Gm#z-aTGNM=Vry(CW9P&G=;=Z*Q;Ws_ zN@ga%pqR&bp_`ds#lo^XtRj9r`f(peX)*M0QV=s_y}U=9zA7M}_8l%V)sLN@@_Bf< zn25Apc1+(X8VDH+82zM@-Zv?$TIfR3i-hs(q2FS${%1AI*SF42hMNR1-%f%D5!=%% z*~EinGEToZ;;KuZ^P{1IAz5h1DGAGe2#9V23qeG#2dE56Nv8I-^Ksp;=ZT5bH_C6| 
zigG!h+7hY88s}i^F#%MtMAz(0VvP~r3=#n;I|tmF8|M;{aC$yrv)|=1_Dd9pdeRI* z%q_uVv0!r}Sb&rsc@mTk1k@n7bjo`QxZ@cta8TacbZ#GMo4rM_*{gq;oqp82u&%G1 zXc&`ee~p7DIgKOqhT$R24gdTiAWbyS01`)8sViX$n$MYB&r=7Je)`8%ZTXkT6Ewv2pVHK50BSFW1K4Heo zW$|aZ+1euSU;2iSF$taOB@Mrbo)ouTA_b4>6MC%>jX{UJ)K)ZLi6p=&X+!XCWqA31 z`EelU3+;lj;!Q&sSZM*3;LWJIPb{mRqsTpMSWx>WQ zWfw1czSpk=U|*D>TBnBwu6Z2FYkArkWT0))7xx%GGWcz%-*;X>34r{lVbPObZW%pt zbgjIVv1xr8JR}9mM1;FWGbr(3-C_G|@3uMh1@>N+$_f+~?yF|%G$1wme?)z`Auk`g z>wV8}1G;0br3c_mi9@}%3?ohp+&}3};|*6)b9CwER$^Jq2rIqp@K_UFGB=PB2ySD^ zY?EU8TZ1@KFn005^%zg`@c2cMd>-9cNhqU}EqbbuYz>it2=bmm9hM~$suTT;lYM^E z<@u%za+yxK`D5QGD(L9}|4Ej{a|BlF>tbaQC7(lO))KP zwe-_p)gJ|%sZbvx0sw|#Tbn1!(v*=B01`&iZl!OPDMGSt@DoDOS6+7G1c54g!i8Uo zTku?_fVMXjsmGuQW4k^Qwf9S;p@BK!n}hRxcBxWH@36$Kn`MCH3_$sv*jOI<8JoXO zky58Ieh=2%d9D`23)b)!%TKBXK=R(#dLXBcfGT&z6r3}UathiXaDaqb20{iat5mW3 zd!#ydxJ}1c_d$ngN{U37@>&BHcg8K`y>L;Q89`75AFmq~0)AuSUbAz&-bH4m8%BoL z1BwD&%r^%GD`a8onS^?;hifQdts zJ6RqgO=`8^3-bqNXe%2RJY!3(k)O(7uez1tF#d=c7iI7ssF`#xo{Zf+i;E#zlxk)N z3v1^&b!~!?>Z-q$hGu_;mW3L2d{A!x;!9E9hN5@+aI@WHdGWjd7y-Kz;O$JAhJX3QUB?52DwH4l% ztpF20T{BsLC0h_toVGBE{|u3}1RxiW2iV`k1D~3zEUzZ0N+z`FoLoni5vREWIXR^9 zhMRZ}x>WPx#50G6?I6Inl_F5gm)-1=uZo)#uch#w$boIdGz(2)0dvC=bhAN~;%+)u zP-R2946?r`nAc0}ciu>t>!i9isAa@YV&q*aypUkhjq9^@!+9=TFzFl4gKotHt`GNn z_t(mMk^r!po^YRrnora?ng?PFZ~=Nt0#@2iWn=$kUo)$Dr1yg~%p_9GeB;ZZVfpDs zeX@h^>G;CWU}^zC>BMdydFEIS!S2XwqRu@*vA@HiZ|OGWVNw1wh+KyCi?YPI3qV$9OP9n%5B7ScVVk~}Ugdz|C7Y+{pO!dZ zR+07RMC+)U_VFqo(zQuM@CJa)Wu=^7Fg4796i{?`o05`9)4Tu{ipJq536nj%5J}5O zXlh)CkfjRY$)Z~`oZt*>Ct&+y;)Mi4Fg@++PbI_ux|#)8uX-A0U%y_(B$;CuJ$<$y z2;-<<>%-^j-$XQUc+I;lNs0PhS)9Z9heA3<)+_omGZHmpnei{FFSdf2jC1DRFs5;h z+f^H&G4&$A937D3^)EZya70q81dRb-C@@e&QlpNNyn(lsDjBOq(JEEiPVFnI+p0lM z)Gr`s7c6?V^(`yq`8J$|DTh7ri|76d0a3qvS)}jT+(HLqI-k+mEJ@aMn$L+ge?{szZuC^iiggaw0;{{U-fnA>Bcy*p^n7;Z^BHJlv)WTr(1bH$zTx4Wpvi& zC84zvO^BSXmBSG9H^8Rpk3E{ik7jO^FQ$3P=rNPqr4PuLZUjIan$D|^cjB9DJ@=OI z>8FhBDhD&!lg8wATC+O8cC+kcD&f6&4I*;!%TTv5I=>COzCuGC)Ykk(!c}$>ql=w*ORtX?1L&=L 
zhewC+@5?4_LN~n@d@m1uvnEVI9t&^u_SmRhT#yv=darSy=eEj(jPer^-}R@1ZH4O& zhfi(O_lBzt*Z_i=U6O=7n)Vl=Di5SpohO#Fbd&Jb7-Kmi_y zvy_5#%5g%Fje|DwLv*(rQ0$s|b{fKBctiy$3YK`Ng#cT1Vai}Sz6EKg`#E!xLgqQq zXMOQl<+ZXyOD|TGrFvpoA5bH7-pepO1v|X7Xxz)@O!PfpHF|2uHBWj9oYz`eY=59E zN#n^E4IJ}3D1P!ZGt3BF;>u>uPw%&JPK+VN?F$`mhDiNG09+@s8$<0ePJ!Ry8R zoE0I$qRXCBA~WUEue8vx`-8XXeBA-|

d9~vJ%0{UwK>-_LfXbq|npuifDw^MLYpVm4 zQyxHAnBF6uaNvt;fSyU>EkrA8sRvNwNwsX4h_328L7<@>2E}6y;9Uhv#9nsy;EiH& zvFvY@^>v1Oxv)TrM++eW#ZipR3XSB9u~jhu@@Gl? zW}}Vg;fCFm$0B7l_J@72KRi?n9MqrYZwv1Omxx6jXS{}xe-Ba22>2gP`%V|uE+m=5 zGauWNKU}3n@~1((>n|9ooGKd=HfZ7?xjXM-{sTRvn{S{{`y72(9xeZLJQIj3`uQdy zsP?9BZB5MA2bY%MBfD%?bNsiOOv5AnyQZsS95TKMDmPwmg5C!lRj#sNTLsv^%XN-8 z_?sJ!E*a*To061~qj*~(NOK9YKU#<|UMl--cRHh+7)m2x^z(O?(?-(-L|84=2_5oJ z5be4e6~AD+Lud#Zuvw%z4W_iVAG(O2g4nY=T#ed_vdwNzKqP$w&(H4^_2r1$bg(v( zzbXco;pG=Fq(xvE&?B;Hmq5I~}_g#dBr+nw&XCPw$c zQymO|01lTjLqOaYPR$INn|`U0%XnL_z^@r2T5e;YO8RZHATgTEI)QVZ&IANxwePJ^ zw3+E%rC$uhbts>yQ8oyTDV+a<6orWBHFyJYRCXj?OD+CFBC5P__l$^kTa1^F{2aUQ zTR<>TMH!o4&esNVm(T)bmvB9)*er1XkDk@*Hud5{w9qFQo4-|^inpL(ym|Z-kFrtFr9inCkOxfB2JBsbmvsHLWDR)nU;EJSaapII8&p)g zrG5L~h`F#ZxYB;^lV?q&67x*>jq3orxuBkY663M?<*+-7UZMMRvk8|SL7~PnHVbKb zkiaD|B*DnWTdgOwk5HDxZ61V5tli^vi$*o}|?!DTMt;DbdZW9C} zLW|(gGIoGCESvZ4VjyF9Kl0ngWPSyBgW#X81nXXF79yqXKUyn7{50RN>sI6j%Jig( z)t^!Ro`cC(_H!&;p~{4Uz@rG(u_M-FW`!9}q^#q3_W(*JomWOc)YCJqPjJsF0+1psSTM*7IPvhg?DzuP~ zZY7BN7j5%P;)s&cifBO47-7?m9Zs~nKzb-CYCJH{3}1Xjlfn0KdV|Jsvpd)?3v7tO zl>!z72ufgyE+P!F1+Cys?5GG+j0d9mN(-tOB~ysL{G13lVR)1|4nmj3_`o?~CKV(A zNdaRYKBU!)*f>8B2fq0`nUCNyA}S%l>(qGmfFf$?YouXsBk>;LiCmOVfO79G5v#Kl zdsXE*XRKUBdB6;!>Po^Ot83N zW+)#b(oxa%HSIj?+zkYatD8i7f(^=X>$Vx-KDmPHpX-eURt@^z;sz=SI5@iMqFClc zld|>Y2k(Z_fx7+4*Z`WpDXPAwjG{Td1+i4~{vx)E(JT40f^V*qUH7L@TfJWd`5Js2 zyHop-EPhPat|Yi$gil(PjiS+{W(IF#Q0 z&5;16X${EbpD?Lwm@{JfL}vrv*#1{qkENpnXCkQEN z`4uu&olGp4oh>MHx75p>WBiI-F^yO_=XpB5O*U-GI1^~!AXh=XqV%Bt?Kk$X^spnY zkyiy3PHeKgsT0qiM%>Ip%pmv`Aav;xTN=6UcJ6Uw@-(;e%+AT+@ZK*a)oyf)Uz|GD zx%4wp=K9vGECn}_fhG&@%6}yO+aUb8Fz}vXe(TX5jRl%qT#aTDu|(lO!`7#^4Y;E# z*Ory2y!Dw$?iFGPOi0CVyZ^ei#Ya1%kqNaQvlL@T=8KLaUHm<<;%A5Qi>O>FX2+J` z#WGr^QoV;vg=LJPxvreLN5Q(#QT4r0rV2JJmpxcvH>OL~^A}azO6N18F)wcij<{+^O z63m4s`0@5hR@*Ps@62&2#1?men#*4<41mB2jI9;lk`;atE;wZw@nEB7phzDvHv#G? 
z3sBU?o{Z6vyy#bc_{Q1#RtDML-#srg(8JWMLeFg?LykA38QY`l2@$YTi(>`of;BBO zRp3?yF;TLZR~2AIyc9uTRitk#M5HiCs{GF&9l}PE2>!Njm*D>xE2f7) zut*J`#AAv^Sasss!yWh*9BPU7lCtCS^OepU)%(@@(rWX4>HZzrzM#L>4spf2p zM~r?q`<@J^3-&^39#-1stk#j>uu7GJ39^JRj=JxKpZ~(d#+Nvn30wfPY+d|J3xT|B ztI@N%crWC5IVvlk{6q}C;KrloKZ#L|9Bif=2DrvUGUk-H3j~9}Y zpMMKVv`F({(3KOg9>|@@b?X0tT~v*hMRurv3To5@Ffh}FP%<{TFQUy4fJQp5XU(Fn z?iwlbNu~+TZei_GiBNFe9txa+I1S$p^wG6Ib)~H0nC|$va&+8^p2=;egE1Lw17<1r z96`$u^pUSp9YrMgdzHk_;!EFhR0a@)^R2XX2TAt-G7if=;+ zS(RyUf|eNu*R)*+HIyhQ3Tou|a8Z;AGB`z+1sv{%Uxd1vc;}g*I{rzIIA?{W=*^!fZGkoyzOCe=5E?KwMT)7M>^C z1zpks(aW>l1@I3SS=F@wEDo;Fy5*22agY5$E%G9^t~reT9D|;y?6i)-`6zQBXa{qw zyZPA`y%M$W**0AA(2l3c*H*$;W)6`^v-6yB0j4$46tM~k-!t#!_W%D?XE zN#t;v&ke6wJB9KyzPiw54w>Gind#$&P#p}%bxhn7ehz%-;h@-I;;*|7E;)9SAPkHn zSehKb(d6B&GVLJXby?~iU~PRSmy?lzSpyhC*z%`z`-3Sj2j>p@_(vWsRpa&#c28DI z$f2%IVeD00_hsI4#T$C&b?`EokCWwzs=klbSxNkX4&N=dqD{o33-K4k|YOed2(+XfqAz zkgRC{e^b%>Y`YzHFz%4vDa88bKx3|pvxx%gp(fl~;j~ubdtNZ=I&1)&sC9mD*Jac! zK9To_(0z>?{V5c)O3B%2RzziIwY1g@=wSTVucq5iLV57FD z=95C@%eJ><@Cm!u#G00ter`37POq)Vvr$ascEa2c(tltQ%TxLy<){(Bq6XTl=sCmq^@zW2vSWN6;8lF{OKc=YO2eaRw&6LVANMKE z(^@v8K)^_QLIACV=yreYzQ;G~-PePQqE!T{C*6|akRZeu1E?F4 zK(gv>IbI?8(?`hCKU5Ljel=MW(6JzytPLT-j55#Y&oNGX5t39W@=|Tg^A% z+S_1Gq{?LtfGyIZu~26yJxmZ>V{CXnMnJ|mJG`H-=U=HA+{Qdu#Hm~7eFS!L_Mdk$ zZ}9C%hw4<++T8v)@_lp74B(dS>dI9*IkDBAd`a2#Nj+|EH1) zvS<4p1tpDRmS#Fh-#UjG=f0=LmE^rQqZFU4&{@%W8|vMs$lWql@YqxH1*;qn6sntQW|T5QHuwg2gMS*Y99Kc@Z~X2ZF^>9*ni5p)bYfWZ4{$1_JT=PI57_wxFTI zTheq<2iIb(9)V!-LRvyB{WY1TxF2r>-6kWT?@AMPW-xlul`9?RX^#4RK9(+nQ+@^Z zQPbW^;iIO*AbNF0ox<18LP#2?47>vQ0@Dd6p#dhK-yh;*ANR_r$vxhIt+7$yRuhUv z=Yt%PfhnOUnsPOKm?+FZV)bU0tpQc1&<{;0kIhQ+L6Oin0-N{?V(~;ZW>tzFRIN}3 z|0m8u^bV7(TG|aq*2#8N+$!S7rbKl0EBOU+eBy=B!8C9sfY|Mq6^b1WhTw>A*JR(Z`B`qYbW*>gzV`_rk&X+GiOBx=^@I4 zI_CTs)pnKPz#FCF33^0>>T!c}i0~rOBiyJYg1&?QpRaazfzX0-8bMXag)p(v=T?)!YQ zfP$EwhhX}&L7yUipptk@y>iQ(2+^y)mrs*#`+|(J02V3?+zuDMdo-K!T-QUu`)JFu z-Xuzz^;^aM1M!Wz)nX=NRo^dNQrz+U(JjGNKm1#Fq_|HYchMY+u~90ciT5l5naqC4 
z0W97kLy@5OD=%WB3sjw;a(7O@rEmx`N@F6oNzn1uH(Z#7#ObKzd+B=Fg^Urzr15am z#eUp=N#n`NLUz0%4`|C-d)Plf%?W($T%Q>$N|k9Iw>>k55?ma0#oa*1vDK!RDqTA-#H6p8@U`8^;vD-Zr(<<1 zqX__WYLs4;>5cFvFp)@!I@);4Fqae5yZPF%zPx}r{s883%ZjRh%N+T?X3 zkxr9!q4+J|bh zS!yimxAm+X-BVIX(EIqIwYCut!2dAD_-bjRbkWtq@=Onid+;5(CDs@#lIJ?dC#D*` z_1^vFc^1MfPCC5vcs3Ah?T65wSqv5Dgg zMPIGYN@h09Ma3oOnb|)5_$8=7q@lFxnobLpy;f^a!Il)rep+w-^9{7YflmsYcy=FH+R3aIaU{iRxx3NPo1FX!n zy^AgkhFQOBCJPqos1~}JC0cNNO_2nUreOI945ciUB%Rm{A{@SW95#^|Tef>Vxt|dQ zXbSP@$YQl#pR-HaB<#eG$iYP= zczD@`B@t1k147YxjkP;6CP}Kuh;26(PGr!UAV^R3_!Q;m#mDUQfS+S5x zyV|Y4u(H-8D~%FuV?Y4S5xC{skg0en25JoQByDmcb%F~*WudQY0II`lPwvfwL6Qem zb_*R$k!1!TR8f82s^$bkzC%mc=_i}y4h_%sL}KFK`4Nn-u&N^3^#Iaaz9c>jc*P}L zM1UZfY*F8W4e>brarkYO!=x~5G9EhlFkUcA7?Jv@N{j9%%hfCZJB)(J3!DvXQT(tr z!YvhgFp5oH|8g6`Ydi8eqNB)8TA%6_NWps89i7g<3-mOfGvgfN=m6ERDJB?ejVtJz z@+(~wWhCI(!H44H(S}@Fj@uS`(y7nC__%7NxU;3DRAPqav5z_B%_Mhf68KiWFHvDw zRnU;=n!hlKE>eL${rY*+=dd0>O~reba3NUY?1S)1ZkFwAW+4%(ecdv{Zjl3&^@Odi zitjC;&1+{8R8AE9gVbqNF+quuZ7f)f9^3iByhL+@_dq*`nb~1d1+*6_+(w?m!vw{Q zM#Kwk(?Qun4s&d$94@n4XArm1*^*^E;#z*5XQAdWF5E*81BNz2J602O!bvm8@NJkF zHf7!>RVtLLk?ybts_77xa4q*`?Dr_T+bpFgtE+msEfm+nVd(4WNHF{LXt`=QQLkoQLN`4E5M9C_rYv+lbUgC*3qMIaSA)5np~A``U2M*A zNEmyjMFd}PY>CQ`1n&D|nAzwV=Iw~~0T|$M)?;4S`CYD<+HC5_^P5CIyrAdD^c?iD zD${BFyb+*VXSLRJd0|kcVa=bY=`lz(X;9LUeDJBAwWsbhGrLQw^h`_vyBPn)7f$6= zk--Vj^pxo#q_D^94a{p>O+Wj^h;8Kef-|>~+Q8G_GCF6qC#0L*HlH@7On>+b5F3Q# z!%DemhD1WnWs^Ph z>K0-5+GzQM7kqll*+mh670}t^@A&yGSn!dy@0T5%oAv%Yk>@sw(XLms1y>N(%G4Bn-3@#{C~H2%4z0BpbPjWp(FCkz z<5J`M!`Su&VOjC>d%#{2d0M^a$fvo$Ah?zK4Ze8qg+vQA^j2UvZ}kp3SxX*)5MU1Z zks=JIbNz-b$Op=?&+x!C@J&i2i6~QSb{drsw*?_UB|El01#jN@XuT3QJkB081e1<= zwgu~vRz6zCl^)+JL~IL+szNVd?HLJONu@=y+HCh%)QuXBX2Tu?n234q(R@l+SB@@g z5c-rL4pI{lIkl){HYa730eme@BC=)0nwx8&Pz^s%e}f`Cac&-Xn`oSVe3aNelZu+L>OEATn=`G<=MA|h@R@-}-+pCEi>Nsi3S-y%E&o`&0xB7+1*Q(4- z&om2UH;2yMY~yDsn+n5>lTWJD{fT7kUP7!kZk;wap~LK z9_aSP>82pKTJgzlnMf@S_szj({ib;Nu%Q?K{CxQZCO&4YBn=vfJIuy9qJtpQJtoQ~ 
zYK7(yfzFdFKThZ*nAV2}u#N`??MDz@d?vPkySw92qHMJU}>A_u>mg~ zqnH5_8E^shsigLzd$pS4R}MSkkdi_9@}sbr%6&e}P%DLO%|a`)#I9i6q|#REI|gz9 z*f4h~3yY(+_Ps-Zn^<3tIg4X)%TH+C>|-C=_^-gt=yvk-8H49`9%*!+s~uoAG_fn9 zERFm*q1f^T;EIj(af<-pZXz-gbPJ23&rc6GcN{d?xfu>IveyrW;^Kd`&5y;!iF%B( zB%_Pz(Iv-ci3VbFwT_eSd%RfIkuA^Arq~M&RL6Xg$A&fpGUA27Wq7sLM$KFVD6w2y zrMS+k>(}EerMSb5>NNEPT$k*ki^DGsZ?jwbOZor`MKP$vYQ)UQX@L)zlz;m0z zQkicV+F6o1NkJz*913F4!xtk5t+?j$meh>>ky<`hUMA(})i3E24&JYxjCq@27Pr>g z5HYDsD(G>!+{O04F(EFkf5Ah`>dg z+YJElXUZ*u0rxBoTEuO%Ky=)Q6>@IKtJ+^t3YmlYAAI^m-dJd_B)(EhSOj_Omt5t= zYJO$Mo`yqIhIT`rf)uA+-T-OVOzg#yo#dH}F}Sz`m*+XJkI)1I>Zd~ju%nPbtL^>& zTpK+#yo<8LiCwgkLjD^j(P-j1he1vOv~&D&j(|yC`g&7!h(ZERQ35DQC2ajN(>G)y zE|x2?5Eee?Oei4w;^-+&+9SfFHgG9~dQ+T#2%CR@M=K-T&L?L8-5(4muygqQ^=9qM zuHS*l`Te7=hoCG1yo-W1nX)C(o|>w{*CL_B#vky`5^w~t0Nu*tHkY(22OHP;skcK^ zlW46gyFtsIMKT3F4;GwMyUk#x@ASSYJTLWDWxEL|xuUAQ`PWhEZ@r{E?Or_g_>GSq zy#e@W%<&ICk~&-`_)bBj-s)I(ZXVt{%oHhpvg9^Cx+4;gP8;W0<>J7>_?tt~-7M7f zV4vM3e%Dv)Nr*=c)AY1*)jsp;D^X{0NYE@8D#lSa!2BgBf)P63w~xs=0MHHL9VnbZ z$;<3D1yL9?OB3B8idRx59R^o5lkANv)RN}BR;sA3_O#9y%ok-oI>kI;6^mc+MNm#2 zV4@866`k5v)6ip{54IsR+Rr}XI1}y7OaX^@CoPK_Q zh49?@1sjQr?MN6X)T|IGhiSV}e50oIj^4*0V=z7oEjnSDKukf!aTwfPU(^S8sMG`t zQ?c|Lt;+utmEhc}8*2ynl2J%%+602(JKOizR$8?)vK)HX=poRkGGh>md@& zofNt_-jV&}!rRs{icL>d*#r#7?}ACPOUu?gl)p}l+YzYqwNe%cCG=`~^sbf#AETCR zE(rC8cd4wdeLDc5S*}Ly5+E{7_(0)fC_4@=uM}bB+`)2o%^)<6=clE z`|pbPYl_K-$9te{DwC{Ru3(1vF~})PQr9`9FoNWHtAAMq-I#oZ?=DO*88P5#XHa{y z6&Oo3q9|mm93U_NopSG5IcCZ18O;>MCLL8p3C){ujOGw6ej9?GmX5v3fQr#TJ&L`) zk*DbniiG)K5?E$1?>5@E*`f$tVmAOY9}O>A5)<_wDB8uot=45I{jjoll#|HT@Ek5C zb^^I$=4hu0A~d-55iDtf(#_uaG)P4%+X-W=B=+cFcqpVN;&2Aix3 z%+i8!qp7Nhe&xNJ?{fQH?Y2g3>gs{?LNTxc{xpH`#nD_=sf@|Hh?g1(7e=`LccWcy z48U3Nn|j{tL5CD|zx;KN3@}vS%1SIQ@?JddaA7u&k2$^e#y${xUS)moUNbc3_g#_f znK4kNW%aj5P?FP6QVYGv5ME4i&U~%aDV)_ zafd>uN_PsuUvzmEj07zeX~Za{2ZV;9J7Ov#KwRuTPxa@#+f7|Na);rft$<=VZ|gWn z*2yS&N(J46=Y>}g5UCf*H(Y+dA4v-3D2K~)r+Ay({PD2jkBt z;O1n=Jr((ALyXGCmh~zyfF=&LZZlkbYquvyGC6o 
zytlrZ1NXNZVJaf(Z|ngtV|UIXC!X0w)UCBo+VUjT;eZ`&>5t7v7nsKj(+c3P69I`J zFnbCkukc+29)EwT@ALOc$9t`|XEh?^z^cNjwW?*C7d>8wa8+BXkQ5aoC=mBa_eF^f z-lCYvv%>~gU(hI#rB?9*7c;=!>+pzRD$nnGo72;X+i{>VT_tCyx{#3|{@w$|Kb$b` z&@@Tm(kqomZoC@MkovqJRi_g$26sHWjKH9ZzIA~CC%ZzL1fXYwS_QB|nH>@u?a6%K z_qo;Q8ybF3$~(knTg#1Yv;>z)DrAw)E8D{hXnNlwIb+qIVDRUf#zQ=^9n-BMp;UER zp2dPwFYljdXYI(?Xq*S)idznWzrZoy=eoZHzYiw(J<_9gz#WByyxXN^P&KJiL{Dfj z-B#M_B(=!bAV5}|eF*JcC&!=kitiDMXK zJ6dD<7s7kf#akJ5`>Sn-Uq$A*1*QUwZHtI@q$Kbi!4M>0T}6ueduLQckU792$-=S& z8rTmc0$N^8g<*A!TZbNvTPJLFpHP{omxCH)hG>S+B9M18ms?(pHS|}n;&?BbG~h#> z07d;Z>l9bnEtLEUrbf`=p44$(N&E^zZ4d(i%Y2dPp`F~Jf9V#n~j! z!QdaIV%U&(79-wgkNZW_Im(0rcKncM4~h$D?;F+j%1#U_vC8OuR@2(H8X()8Q(WwY zarh$=qNN>q7c+eoT@BrwWz*8-PV}GrO`VOZhJJqQxH_j3Q9Q7hs-PRpGdUYS(vx_3 zW#+;6sCg*E?Df-Ih<_u34wGh|8j-rx%aYBuV^uI5cSG0bBsQH-{xNcMm;|VP!^mQURJCOOkV9bu;L8(W-lFM1Mm!04(eC+u*$vF$p+4y7sy{4v{RnsI(g!oLV5_)@vd%~f?{@23BCtK%16uJl8! zE8;5^TbWGrq+$;UDe1nFV$40K5%+sE@)PdWX9On0#T`(nfL^4lxOUvG;F1I9U`u>y zGfW5zIKowrK3R)dYT4497hDl}y=Tmlv_8?Qhe!+C&Vm<^(Ql1Y(e=r!evMXX<6qalvnc^kMSy<9vR-QK%_NP9wX`nl*6lTwId|1|FQc*9>eoQ zi4$r~AJ?lk92bOGbB4?%d48ef;n|L@x1FbVJa;9#WF-b{G~NVm&K!3DM>Jnm) zHK3H=W%ez}$Cd}^lO(5%&%!K;y76xacGYTVx1JCE<-PV$!ugkqS3x&;Dve3OsUHdZ zQ{(Ffnt8_?Co(|V$h#NB0>%frTZ)`2LJG)rru7zGZ$j@@SGKy z=WPCv__ByykDQj_-Q=nSLYU-iF7yi??x7==lE8szaiXrwW2@k?#i*fKK^p|mf<$6X zdW2DzTdVV-EHHfdJuCBt-!W_IDzrX`cS*~D)Kbn1x_GpV{-g5RJiGdA(*Q3dB$`Os)dq# zX*VnTA|ZHn{lt9-xyKPP_!v23X*3DyOD9C0xe0Z~D<0oP#h$ksUl6Tib)7GCFY}kv zcY@O;Dmd<7t}F?!`YTjr%&kUYcHhSA_-TYa5_2eIyOTMt4GJKE2vG4ADmp&@9adoWZ+>;B`I`@jo5_^3n@zKgyEN zBV$2Gd%~39iib7MpeC+TM^QIX$n-k`2DDG?yuP~iJG1~`?4GWVd7Bj2n$6>4jgXu4TN#_k`_Ts;|yvNOkY4&!jAdn?X2bW$TtO*zqCjnhZ zX#6F*D*f=2P4tdn%R26TpdKW_EUy<2tK1@e!?gll!Fh0ATiP!XjUJHK-nZYb05V^vk>Bpzd_wDi)t0mA@!nW3=v9X*vX(&v8da(; zDnvXP+oS-<`e>LSf(Tljr+0DSHR=puZPy2?ZZ2l7& zpCwl{B}2Ms@uLTu&1=3q<3t61nJqbfpI_sf!qJiSCuTaD7i=Z>6%t+*?%i|WBXxU% 
zk7ZyhC9rgqwf_m>u(BjO5G|^HFXjIXH6=;JG00}geQOk)qw#P3c>Yr?uNLgR{ionSPC{OSD{P{)Bo{u`1MTVJBjovsuT05o@kYs-lZDcUV|)AJ9nrm?+4rCiZ42lnvyA+JhT+5Cp#D2ke?~nW&v)7N^{tMr%B(nn7Vs>P z;}KnoMhKfo4Xpgw!sl(Kc1zNu5t_y^5ro~M@D%_LIUZA|mfd@s0HE|dn&UJL>-6@W zQs5E0`HmUk4kems%pe$9pidK?ABe$>sx0*n1QybZM&<4 zK*-g0C}P1iAkWJQ0a-3>_N1}TaOQ;%fU7364Su`8nnIwQR5K)PSibd#8P92PK2tsn zujHnBI6mjqJpuc-2%U>Rg_MK&)dw$DfXGq?-)!8v<@jh{PLTbuG>#jN z{$;0%;Rd*#CUR2-=qy2Tp>0HB!&zJ&bPIvn;(?4?h~F1?iw>rX~6 za-54D9`5#(d3~FAiZ#hr>U$m0vH~NFJ^cB}5wA-`GEUiSJcje^9%si^kW^83^+mn! zO`YpgUNfRYrJbvAllTk*sl=u2WvW{4q4HH}H@AjWlaW3q*|LEG6N6yD?$O!T-^CI; z?ux!=XEV<(=34pSR?>4jm&;T4+j#LN(bWAM@j1d#Jklw?Eg}=10gwIxoQ(&5nciW2 zz1LfJpvHCtD?JN1;@i@juedN$D|aVGeFtyzmVv59U~!PhLRm~;=Fz5&a6-KOQppmG zx~Y{_8YddFK#T1Dh>?w9;Fi-Oc$Y9K59#`knWg3L6cKMX{x8o}X2U9U%xpgvwS0JO zB$>3u7Ql=vZ!I)SHw3g(8e_6n<3VTF#Kw>9MQ3J{Q%Z;F3X)MEcP8TltkVWFTWW#{ z-D!%iak0JQRtNvCdVq5Jf+(DL;SGO;fG{{pezv~)k$xAVGJh>Ta(=~~QS0d1)Peiv zkf$2;o|p1aAq8#iLtIv6hQ#s~ocmj*POF)h2B}?+_91uSfe+MQsWZiwtL4XYcqpu4zA4 z;xW^zWsv~egR+LNK{$~#&u_&OnjN@HT_~w#Sl*TAVc+^yM6zq3olQnfF@{#!T2%EH z8XPURPWKh?DR{k5{y;3NwaZsy4!(hSWYCyTU{*q|k66U89sk6%4lVI;L|ppOR{r3D zT+v|5?)h0dDx>u5AAug)c9y{4%GwAyzo@cMagk zBs{jLr7msAnU7teN}|k*(*BM9=y`|CUHM?HOcvE~J6P6Gmsnze$<*-7626C#@>dcE zwdcytQfTMK+jbt&(>WYh^4bEN?vDHAp~QXoiq5x>2jtQD%l*Rzkhp!ttt8}pv%`b$ zPZP#7;@$ALjT`wdmLn~VY#tfwZ+k`Bv8Xg8gG{AHM9SYvW0|+)3S< zc~YFcvB|{(Y^18)+RjE!RBb??t1*q&@`mxR$8d(;qin6zngza!{ygT=QkuYV>HT$5 zxsn`px|`D!KmxJbckxwI+RoV%Jg)IgYS9#(Awy;9d~bHoH!@w{jGQjCM<*PKDyu2+ z$i@mP;eSlpLGsRExaFeTP>%ed-SuXdNY>?b>0CBE?XoTe9nx=v=H@Qv1`yw`luSnN ztB9m^ak8RstmqiNpZ)ldv5!OdmKC}wUqWXk^JB|iT-ukU+$l8`R>KW;lKG&$fDkhtUCPs1z-|`|($j@tyb_z#h7eIAg31h- zNzbR*={~a6xa^$nKujFthnCfxAl80H#Jr3l4YRhb)k72Q z<128~dD;Vv!~KV&-ttYpmD;qne}+U?4&EhXygHGg<=~vLN)iwlZ_G-5;>0FI zoat2U!E7Q_c>cEXxW~_uBXBrCxuynTjrZj_UK5(d3R*8rhPs{K=lMvb-PtQgGspY> zgG;<;zZf;J=bEp)YtW{0aMv6#^rH6J%*TrDrX+189I@j%Te=mZ+K5ViKz*^TJMBS@ zj-wja=!0ymopq1#?jjLBp{;TzW4qk!C2g83z9e}t!xj{6CZB>SRJ{G0BYKr`qvRr4 
z&(6!NsBDxSdgAu*RaCE`m-dL#w1Ahc7}rotnQC=eSadDrvYP3ruOyFQwP?u~7(kXC z%@nKZmaOIE$E}2`-4Cy!Si3=}qi{ZI?V^M26Wx>}dXt(7uKdp+b{b!2P~8qk68oU2 z!qe^%Pm&UcQ6{5p4(b}t^eS&udXW^hxbeks<}cwX{A}@e?e$ikvJp4wd{x}W5T#wz z=k^mD{sUKTY^RJY)F7^aEP4v=j#KWQ^g_}aTU;dPQ^iKx_K~?i)~N|2WyD3M%H#{+ zvB);sck(uR8pHMj1;NJhjcwOuqA3w~sX>K!CPQ5cZffIt7xRR&Kz25rH=BKeEqcm6mmHcPd<>{i zWRR)Bq8lt zUqyw9I)~SzXUt!u6^ap-#&r%g*8Z>`%9Uzt1<#jAOuTiSgK;sViQ`%BosvEGpN1CD z@5rl`RA=s2*LE%sRCPm>M^i%etW}_x?12FZr!0w6O8WJw`g!aY zFy$A>3RO3A9~inFKAJq++~w$Qyj}S|IpY<6-)j>8l_Lw~&bVbx6B=q~W}9jVY%Pel z2);+7?c%b$fqZsO(t53;9tisS%aQBfpf5pTyL2&p2P?E{tiyp1-W=d>YgYqhy^e-I5=6 ztZXE8#tj428XCUYUtWx+>HL}BeHeEmCVwibJ4D8F)&P}Vvb`57o5ZM`uA+H$p;48t zK!AUAxk0)}Wn90cblsS9UR(B1RDt+A8BGmsoJ;yu(qQQfl+N8=+$|z0PVVo~X(KIE zWvUF*9Hv9N3+c4XLuJaQnQ<2I<#wf3R*9PFP&uZzd3zm+-ECq$r6#yABY5gGQs z;Wd~k604BZdB)YEze1XtZ$B*)C4xn9q%?=(VezXDiWoK5+Qsz10VM5mK=+=?^Nzd z9x7RC^ZMzu*42!MbJ2DhzP}rVsj|LgdMFA*cEk78n6iu4-EDH4_sZufg@}oTfvF?&AwI0m*{jVE3J~!C;c5YpT?ahm9o@AVwYgMw8sI zR$z7|-hdlAEPru>)K7zP0SHlcudl|sUB~bR+C$qnP-BAHgNJc7*H25k3;tHXl9hOyNzq2|X)W=wVGtgOz^ zw%Thf94e}Q$ma2+MQ6rAaWm*FPGs%#5-|GcL7^NT8oqHX?DhaQ0hA`%k5q(}h^uM% zZJt-LHI0ic7LzsFKA60Wl~MY0K+zRq(&s zUDP!#-tagWkqewH35a?$r{R)u(eQ=u;6oi@qoJtf#{MPFQb1TdrXqq*!}jvIe5+66 z(^vet`_tT$6Np)VO`1QY&ay(KAO5nAC~O??F*;G0XZGb*{**d<(M3>b>zwX{CR)3W zKxcN)|0ltzOj}Wc^Hp`O+M-OXVd)VlebpU#9&%n^GLX$!>ZdFTdRw|kGo}4l0_Bk& z{8I2XFKx<;cJma==m5ne*MgEiBGU5u*!)AE0Fbck6I$lF576l=T*1kKCa5Iz3ti?7 z_M-Vpvoevtb&=i$)+xxC-qK1sHy5MsHzoVFLXyI=f~c0kF!+nCx;8#X&Bb$d=GPM> zz>Hse=&zVG4)6C-wJ;4WoVh#o`+;6bG3pa^w>{Gj@WO{27T9gzODq}W&j{D&f zRc7THKkHMjO|i=jeC{Ur%>hQYh5T3u8X$vB1;m2+QA6h&pe->;AT0s!szI{MYP({& zXc}sNgG1wmS|Qf8nF&WJ86!zPx^iT#R%li?7Sf6i)sAlioCf~lqJkndZv#PrRnenR zA%%A597XO2ttbUfgDZ#mSb{slBFA28G%zoF1H1-9ut;3VqX2(64wjkq-_`{bq5|y13!`f z`+KwDmi+okYD{#}F7$MEeYX~{PJex{a?sP zC$_s z%iqK*q|ve?xrEoHsb?1hmtm$`2POg67j5U9{K{An^8i{0HPdyN37-0=?YtukchT?)nL@W1)@uaE+N~HNfhfaFp z;7|=4!1RPi>hstzk;(DLU~B1Qe)b*`aIGLw9aKq1Tr0j{Nx`Fs+5M@kod7%a6wkmj 
zUejOkLhy_#4!Tdv#$(O86RhuJ>2#h-B5#&hT>GO6;0m~UTJnk2V4ds|CUIh}kf3Fd zTHAoQ?wsEy%{G?oQkv|ZtMj^r#9op(|k8wd${t90R@9jCa@F!ob9N@M} zg24eh7LTQ+D0n-pq5jCT1h@}WV?p==*}TDwi=QWAkU`H?xg18cfh?dCuZ>Mf{p-86 zXGrYnS4T0QS}mYR473eH?iDVPkN@MJAVC848YsAV{`b+ljIbjST)caiFIoMwj{*5b zbncjlFtbM3z1g~E2Ax6F*k2fMt<1}K+r%;bl4g(|^+k-a^T}!lTFk=Wo*`(a&TKVA z3g{H`--aI=wjSe4C(~JvSOOizM*~q1tEHN4pQ7e|{cW)D%O`rs0D;be%?1^?4@_9V zB_mEgy^((4Re)lD=uWv17BF=swo!vG6}erFw=GsqOe*S`{AkBP`wz%B{`l{==fI@m z(|PKrW@m0!5Hv+b!41U5!sb98Xff;Q#4*$ow#xmTJ>Uqp0wYv3Q&K$?=m<|NnLTa?=1h9FUH?5G>?J zMD@Coqi90l#NeZ-=u)fU{6GF46KhxF%RFEt_Vg`_sk4TUC;(&fSfpFg9Eg{L)!IO< z+65@fuM|cX+SDN?6)oX~g?`Dvh{t|-nRxRpIVz#al-vCz0Pw`s+v9>q#eAuc?XAN0g56Do@VQ3a=`rzS3Uh4Fq+VAmtJxRaz$tmZO&rpu&a>=WXM z;o5>`I@d+SQQ~V%ctjPj(m{ehE26AigZ1w@9A=(CViov&jBgX`)=I&N(0D8(d~ucn zanksP7=7>>@o4wEoj6FK`R61f;1OOqeS}v-r}G6M3M7m##mE}fA|T>@{nqNgw*Y;m zzz9+(^^W{APWR9G8wkmH&+-1m`XM;C*iJ9vp3vc65q35&b>NSIbIgHzzjgKLew~+> ze{&3q39kHjw(!XcYv7W?NmvW4yzpGJ*a2V7pYP-tF3SIRmpG6CSw$6e-J@FZ)WD~% z=yIu{Y#VZy9 z{FL5kG#0hIvMnd?q(A%bNi4alS#lZ-lkL1`jz___g7*D8|7J^iOecY9~TFIgTEu6Q+F61}oZ=Ip7|4_#o>)2Zrq#*lNt`X=*|! 
zEq#b27>r_-E|*+pIQhssSKb*xBDb>MPL9K|1AYE0e=+n&hECP=rZDh%flQ#Liq!s- z)r1egREUL}7qN(kUp@hAd%mzJcniQK!2~Op|MoaSJo$-riVFVp>9=b=K?Suj5TlY4 zR4@;#LWmd>Oad}@3KUMBUt5+!cMg1r0L4?YnfjcerE1j4AzWgkFOJ{Ipmc_XRX)Rb zMLl0?Mo+AIKBH-JYZwCV30-)b6=$$>dXnsriyXR8D0OOkM9tkWLNw;VGkR)!q&T~s zSprWZw{|ZBm)1c76o`)oT(Je7YjQybVT8bj3Fnah0XgOJg?}Dl&f~ljdmmbdp}mmJ zs>=|3FPXnER&~7qjFwe42SBfsai)RC(F=1Kiqv7K8>^s}{cIm9$4p49xT>_3kP=tI zM=tL8!o7%UF9~sOh6l8d)Q|njZ+z*4eVCIjpf*3_VqjM@TyvflL9F|;k30RrLNHY) zq2|CikoQo%P03$i+$Q^HD;2?O=76KZ?DL5GOb%%`AA@gip4}u5C-KyqHFmD@|HA?Z zJ1T%h4zz)SK<;_%y;3uFkQSK;euE2=g$;ho~ck8XN%lL6h+r=eUR*94>@u`Q*>7-BTP@G+TpR6YTbU% zAZoWPioKCzr@sJY01svJh49_4kxW_NsU9M~U(vB{!$u(yZW|Wb9cMVeLaLJ3eBRswMcQN%!`o2lppOh9+ zI6l8iqJWQ+4Ig*PK-2}^R^nM4Qms6kpNeq_rheXC37nk6*C;Gf!PVl~h)-Z-`sj;z z4LaYF(``gIB~C9JwVF{!Qs_O7d4eQZ?h-30VaN&~Ydwv3_PC}yCL5^5Y ze2{k7 zLy(*aI#)O-z;H~owPYu^C z72N;Rw(72{3uJMhrD;Rlk>HMjtSbz}7b-5=b92=Zl!PdR-t~4W{=p)@LCrxQY@X-| zU;Jk9O(KR==?W*GmJ+YeuOs|x$GPH$3=A0OZ7-v1MA*)AR`F(< zkSt?($G{cuJNee(Qf)-mTyZt^q#7#CbaZ`phg%3yLg*oyTmWDH=o_8`YY{{h|I9wZ%)?f>uY3aGEeWg|Ek%T3YU zz8aKmJ|1{DxfSi?%a5bZinZ&=y5`?b&R<6=H7f>`b3`a(as}1p4(bFBp zJt^K;A;D0zZx`{$n@^Z|BrE~3EMs=5-1e|=E0$j6Z zp{GxV^kyR=3uDq+aAw0dQipFeHO|&Cy6K3LJ1FvhXnRoXPdUe)xo1EVyA=t~j-7!( zJHuCboL!^7PAGz}`CES7lK_J9=l}tQLuaWDVojxrqpEU`3n%Zv65c};TBee5y2;|Z zZxeTJ5=oeZR+iq2HZ~r5I;`LTSw)@hM!c+(&NLd2n(cGE?Tr2mC615 zz*MeHf}%XCmWW9Y}=mM<@*yOHBPfi*;c_a_VISvT4OXRvJLI6dv*537o(izpY5RndL8SpENscp2{#)dD^jf|0fzR@wtiuIOlQf)kMg7UtHOIM=UNTn=)(5I%c<6$VK zRA`OP8r->s4hp!*sLqA&U#;H#_+=V94#s*J*WI`>f-9FX&^2~f=+z4-QqXZe^k%8s z>J6%PXY|VY%rp7pZ(916SF}C-+|AaP8BAbKa`^!Ig zk~{Yf^cLmefelI06-w+URSz>b6V{MehfAn?V`I>pkz+W!Nr%(S52K=Ok&D`Fo`azNNMDWX8 zj_?j0XZX;2N&nN+kE%JJuYPsC>}VxrX@@-dW=4ySvT0QxuvUK~2^9+t&dW&XfF znyJ+&?l*D}ZY-`iG&-|4f5F9^RXTISoL;>+SuDq^TTX%!y}>(5)+p6BM8T_yi~4_l z1sVmUByZn)-%P$VdBt!T-Bn%1MyFcV_V+wuVvINuGmf@>tcKX(j5Xr zaySeT7b8Z0jY3cmH9b9yaE$r=MjJdf5|k}{{ff78;JDh-pPql)7j>cI;e0X~uSMV( zmq)_+k2mc1`_ciYFf;c|JHue4W7O*w9VX$~UQU7Ot_h7o{TJBM%;6*#A~%RmLJ$W1 
zb4<)}w%mbkbV>G(?o90kW<_n{h?40gd(_sAHwqzU3m3$%F zB16yUzk|<=hEEkGR(s+(yLXaVdV;>M3%%gx@^%!?Cbh)SoBZq^h~f$*4Q2{1W1ik>fdG#s3W08mFuu|Al$4R7MX_+RMW4vn zJ1G)J6wMeGMeXPwlMe6b9}9GAN+Kz3ksOY%hV19yQo5bSH; zfXi04C&`ybo4r#oQ`LXtRv^*AKzBqEVNKqkFC@WLc~ntCh#CvOE7#-=vGswW!VONM zz}M0h41wwYCU%&AUN)R zTl2w4lcd97>|J2bSCPcafPqc$*5QKC8f0usL>-SN#NL6($799;5n-dy=A6%ErXm+U>1AZhUhZ z`UZ@iLn=%5sTru6lzO;<}TAGIRm>jO zV7>jam+sAG5>d=qQ_&>7@7AK@nV&S^w}+K(kYY+^tYS-(*qFw6PA%%RZiMln?JvjO zf!At?EdtsiZk)*9xlCBy(h>)pm#3UJYuy9%m?#VwpM~YYEBsaHY%=TXJ0Z)=iI1F# z4Kt0tf6fHlb++>u9mJgHb>|E&VM4V~F#6`oXijnjOjMgx6W%nBObV@KyQTlirAL<% zycWhBJ)hk8ozil*JyEvkV#F148r4*$@Q&el-ccshy5_7zq{G!#l~HfY=7YJq@|J5? z{;m}YRK}N*rWnn__;@b~Ox49s|2l`^nirT<{xjsZj~sG9pI^m{9}Qis)i|_s@@=M{ z9*fo_Rk)kFZANvHrS#~0m8`1U$rY`l!WiS18KAbS`eYhse0sKn18!h3ap>Fg#?@i} z8xtck+zq@>3IZ1Hd#LPMxQEoiGNLyNG>|NVj}evrXIzOHo-R^99sK=(k3z;`W*F%D zLNA;?qXB0uV$I?i?u<43Cjm$xG}N$%XsBhPLqM=rQXSi^&xENSW+8)_-E@-wPk#!` zdWkv;CG=;0k_KwYRA?ESUFs3C=vK+MkW!nM4-d)PaLk2%tEl2f4Kf+XJ2=k zTVO|Ptf6>m!Rb9Ia+L8cA%R3^ko6f5 zlX<`WiJ3+q;EgF?|H$M7o{H{39xgfm4T#w8NSrC?tL&t(ueXZV?*yw&r@Zb|JM)fK zEkhG@DFxILg@kLr9uma~y;oH23jDW@qe?C8HaK~0dEK61su*>|)qI*3TfV~#@6zJmXdAYC_ic^jq^blR?zj@2(HK|C`_Zk_UtCCP>XsLFUU*k76pM85*L35t-;p5 zy7-x8ypE%0D`R5LaR2Hy@RM7-^=Qo4tKkW$!G~1$A$dHpntew$llQ$K_?<2R%UP#r z(LcYvqp5BUZ8}W4cA>q!x?5q#TCha0~^_IDYeCWl4iw`0yjB?5~bouFlR)Nki$Js9GK#mB4Qn&p-JV=o@>^h})Z(?3H)a-l?EmIWvc-Z<;+Q874Ipx#z%;fzR~m_h-U= z5f-fgfS;EZ362Em!ROwa-y^wOb-M(@8M957#WrtLe#L0VNgR!gyIYt*Fz_+z<^V3{DC=vdrOBn#&lNYp9(NMi$^tTR z|9YsHvzXnnjY4tZlKlzboI}KJC1nUXxehq;JI4$`<|s4>(0~>Jw163Ew-5B({!&JU zKt!CLOy&b`*@=Vq!5Ve1TC{vyG0JH`(UYq^mt##Gr7|+@QikKv+ z5cufGseE{3=+HxoqM9A!Ta*`uM-t&EnS9t&Nvx9rpR_+<~(iN}xT z!$GDa3cUqUZsL<{*FIylq%C#QlKIrQHRcjp4-s~J{sk}pmr1>(^oKkcG=4CAODEx2 z|1D(tY*cUr#k&#EZ{4K3Tk*E-;xerGir5E>-XGt?>Gi(Fhq(+a33Bxk3_plpqcbx% zkFOz?3XtHkC+%S4(J_x@Y=8DnGSnYg-g=ItBS>Ow>EWnPGy0W<%trK;2fy*R4J`a$ zAe4EYzS=!p#fr=a)Y28j%vJ;)ykaXEuHDoP7z5;aoB{fSCTG zu95zl=%{u({(Dh3xMqL)v0VRpy9C;P&-_%R+$sWs6k5WlC!#p!56i5z6B;-Z^{49d2KSS@J9 
zAHL07@dPMSR>P{1Dmon^l0fFxgC~~ImyCz!2yTU0?F$8yN zk#ts|Q^dE@|HkXE=fe4>KdKSgdR~Pg)Z<;NS;6N*T4mVcCiB)H;t?_rxu9o^9OI~4 zoVCyx`oHNdDs5V;74uQW{&5?|Q(US~(8!48q`cB2l)z$CyX6#3Pp?Gy>ea-4c^iYu ztL5bg2>tU^Gy|#{i#~aNMX}S$$ZrdJ^M_V zDsY1k6_op#A%bhRT_DTKZ0W*HRzNWh za2|i9w;&0!zVn;pc6_~OiJv8}Fmajd!kmuQn>V8ITc96VjZqa_mvrJI3)zGGy-;w6^?RY_?mAzD*Hi~oR?Z8>jk%4RdDJElTuc4-nyLOXx9G|Aq-ZdhRl4+8nWDH* zl^cpqL+psX-x|#1(82I_jO0nu$zpMp5L&j+*rgVi3AR5h`n2raP_jw0l||TLVH0^Y zbgQ|{5b2fkx^jF*^-^B2>8Rc-)au2)LwuR_qzrbYqf=*p=BdRkB=zt0=i6AaU4Q}K zo*83{c67ka;Wi^Zt$uM|btu1Y5)AB+%Lx~fp4QLTZepsR9Usf#PVx$e|B>Co$K;nUXtx3tRtMK zdG!0XAb8j|!XUk-+~RV;F)VRRhVpIq9Pa0t1N!aT7`PJ!)E8 z2@5p#6xpDHAoo{vvHM;IqbQweFJ*`3SAS2Iuj@uE($kv}W58Tl{j)NeVTvt(x(AoQ zq>I!xBBWCj3$b4u)3d~_`*gjh9u#sVtkY>=sOo1aB?cz3IY@8%1`4Q2sQAV1w(vlm z6+l-9e&=rxW#THFFt1Qghj)IzAJcAWm8zJEA=L&HMc)zAIYAh5y6^eq_H*Xx<+?0e zQZtzO9{ehAJQLl;fp_UfdL-0@>(=FGUU}ZUK2HOB|5a%6?}K#a8Q5I)PQqWH_1BBh zIB8H+!|Xi@?JqTjoy;daQ-y+SD)o?)7IniwYKl`tqSC&7PeT(d#QcNd&YyoW=!ogtQ(&Co^irP175;Lgq5mQJ z(EsC8e!CGiXIl^0fv|JjcfE&e6{;0?WT&U4rVR8f&67|X&T?baPP!?sUA+&!08!>v z1jTso{9SFSzzK_N3KA?%9i%xQ`|5IDVHg=4&i_J&L57YsaTCy%Ou^1L%j12>zhCo` z>^NF&A3_O872$Q{55LRJ8$Z!WiYcPuC%dTl%Iw>x$WXbT$kXTbF+fw5Mb67kbo z_=Z2d@&flGw2q5snKty{uQU$?KS#Hn!QWXRcoBLggpx(_3zD)+n&y-2uhQy9dJ()P z7>8|)TC}Voy~xh_g=eCF^h|3lpWW(o4jR?Brrrj1T|1H_Ew}QY()xWq>GdF*cz|M&?Fh~WrTx*B=fCkJ^_7NW2NFU4@r%cf`dDljW)T?^xlrGHAN$=- z7@F)ny&kTmch6enW@JTsCDXYt^iQc~k9NAmed<2q3Vd#()w#A{_C@KDJl;JJb=(lg z>*pU?LJ|J^>hn-TNWt69*Fu&*aTFEL`dbf6ed3X3Y3H7GF}IcddRIFC*F$KBlj?pm z5K5iH;paLB(F&(-vrqWpIycj77`>ahalk(F?KQ&hA0D^M;w;y|T(NHj1k7SQT(hCx zQj(MVs4~AfL7h;4hivDSV4|%B)ArKGRMJq<`H#e{xX++BL1CE=W__MH^AJ=Q<1Bpy)97YO7txUZ; zYxBCkcX|Kgc~C{dnJY#M6~ylqBG;2DD93!}FXIVbNkI6S$JCt%^N(W~CRDvWo=Oxq zt>NJ|$6Z9GLIwOuOYB%D3DK55;1%DOyD{E9SKITSziLO8eU3TvCs~HOQZfe-P`-#! 
z5&GClUpU?M{8lwN;(O_CtSNT% zyBJgn4uA2z#eEU=L4mX6$K$Qa;q16U5bz3Ox)hTpoZZhmj_q=$@F{V#TniNd6<5edT@Py zKu2WGKFV=c{g+_tZj>P`o;bYsL>cD-=U+xFUxEo<(`?_yq^^2*>+td&F`iaX@ z*Ri(Na(%i4UZC1y~*K zl>g~&RMn8eN(X)M{Wf2><&rwjdO0GjU5MK52v2z8ZjADyNp5K_3s2XLXnV$ZQ`$$r z16z%W%){xn^tg>7p-wlS6Ktmoc9miZ8kWAS{KJ1A%ru(!FI^SDYQ?|3^U6h*XXK;r_0DxU4?xN?(y*(EI&NZ_Pq)n-KAhQ4 zqkULG`hAFG{z9h@9?ca)^7p$@6$!krcvBjlTHnMrkL>Lvwmh(IVwu14eHUzV=6L+f z{ld8AwZcnew9`ef^%i`!R%6Ehn$Ijw`^W$Tc7UCD%X|(jjxs*W{pE#(fG-djB?=8=HVBtux-7uyJN%_w({hxfdHXYNI4p88&l zi=~_9({w!ZOvSzYc=;WoC>`t(lEBxPEA~UBoV`niS6%a39J5~nKs-Q+nSz))?@t@Q zl-Owx{uY~VNGT8V@U)rY=h!(ZQQVK?$dc36VF`D!s-rOlz@68pZ(|R#h*jKs!F=&p&@Ta_j{dN z+s}|xGi^Wd;kLc>LYHZ$eW!Nk4;{-}=NTi%N{BYkjrK*Ze8Oq;sJxR$nw;>!lq;>f z_Z*Tr1n6|xfrxP0J;SG$kjQ%VSF#4EMICevY5hIxG&vu8AV=Gv%~?LPs?JEiFmgeJg8QV3={iiW}nD5$n z>!e&)Zd6)U?d?cBxB4zioN7e((tt)v^8o3 z5y~w8nX1~IDT1IbA`Wr)*RX94eo;%!(VeDIw9M)3ep#`)FwT(z|SySf($K1O{U-I(b zK%S493oj`-3FO2RKHVkM2t9z_eDE3f-rty2CAOXcW8vHqR%yi?NQ@h%ES6Lv*h^S=Zg|o*?auz*J9$_S%E@yC*+H!YFREKKY8G8ecO_Z+8dq&(xjN=)*Y~cBvN;08wkaKm|=~O;JM6wbZT@Cj-Ak z5j#i7Pl2kueP}NhbwYI!Q@T^cbV@V=@X*$y!NT7D1U>AiO>?t<3R3}zJ9^cY5q@i| znBnZBs#|DBdKeo3hG_SGkIw7_LVT(iuK=?eGwM?ak-`6xRm};67)Z%GavR1Lq-UZv zUq6pEWz=2<;+pdU79wlXN#=+^!9lEXRga-(ZzAFhg%wl6%*)y9Zp1z0LryB8 zdFGM2H5OAis=-66RtvfMN;OeSdd|zsxa%#?iVOV{i#KV4vrpzhdzxaULK@tr81RQb0mdK)M@5MM_i@6_Jti>qOq%jfxJ%jw>vR{#-pZ-T> zEvw}4?8|8Ab3cKz?{swb16GCNe;l5D&PSNsD4?;LFZ$`YA%7J4e7aDNGw$ZpY7emg zyZS{3{jCG)@6<(=4Xq^NYgh8 zo+szg^OUWm(o)O!E%vXFp*?!2*Z=Px>NVOdJPV( zV}2jkR@jZV${y&=1z5hkcQ7*;60|A!pi&&5;67-t+%oX%vB}Em1LODE6&7e&jC$?O zez5p0)>M3@qgZ91L1z%!6}>N=8+jC#-!;0~U5cgE8GJMG5vDhTQn2NwV(?jyX4csC zS04Ind=65#2po#e{@O02m{}C`?Yi&HTPleRj!K~?rfT>*{*Lds@xZ1`$1c(v{E94y z?!Rg%P&v`E-<%Tu5mx@Rsm)x2(@~tO<@UC58Y+@J+^Ls+t6g0@6@0*Q+`+u-KA~-O z_$~p#1K7k^wOn5W4`w+dK0l(bamdV+eTYA5mWq5;_pz~#5Kd?`NTt^(J%><@pS}ET z1dd*&{A&Y(lz@R=oJnt!RHpD+>9`8stBZ&{OVR* zKrY4blTjKyZzr=9;ZlshH)Hsc{q<7eKNA~x1{ej_(>K)OzFeAKt1KH)JNP4e{$F_v zEBE609A2a(a$GWPHiLr?%yx1|G6<^H(2%*@zhv4nf7QalIMKN3s0HJ 
zdD7Dr9kVmp)3z$wAub z&xY=}G#%{^AM5|;^-;caSCB4OW-LdQQNbKjH)z(whw#6A&TMrTqlBqz&Bk1#U7vE( zn`dQeAQ?|MlJOYcaeWl9aWMJ<>7-T&4L9 zk1`j@crHEgggFCpZ;-JL^>dk1)rwd%ncQxPN+Wz5S`Oe^hIyplUyW*+=bJi<5Ylly=KG-S4#_@rX$$V*b;Z=;!N3(xj3hlTSc}( z{_jIxtyL-C%|44{$t8Iiul%;neBpu~uUF9|3CKjj4V_IwqTO^>+7v?K4+|R)Z^l#CftmFNl51(2!~t;i zNq5r*1T0UHAv3E<&^1sQM^hh%V|?!Uf28AQ*jaC6D|}mh`2L@7Q>B4!xIP>CHiO?~zgFq;`1FU1 zmo-9~4L?#Avd3HagpF^7k+y+Rb(-+}lr?f9WMz+P#%@2`!O)$Zp9rbOkNhOWYyQO z@_F0*9}@rnt-Sm>7dO80ZdHKQ|0RsP=HV~2-r@_+hX&@g5ey-l>X~j33~-ih_Mb`P zaY;WVWcuSw7@?QyUcNXwKJKW1#)UxK!1mrMY6z1hEBiecx7GXgi)J}869Vtofs1HO z1MsZuM}`_X-*0Lrh&CdS;-dQQJYkW;;PrUnAU00)*U1~}>TC68j&`j|P~8LYNgo7< zz|2vb|LEOmY2rWFxhwVOy!*#1{jY*4JU<_P2TPN! zLxkD5Y1Rv`tH=voD7ruP`hS0=bdw<#-3S;IKJ!z(7t~bc5*Fd4H0Tz`LWI2VVZL~t ztz7^gx8Te6X@MjlpGwP*mKRAdkK)mTWFxhf!+0bld5w=@2KmQ_p7H{o4GRoR-iOmB zLUn%+Opw(t6vNR67uHyiAoYPba_}X)==#b>CrfS|9D47YezW8<<$U;Pj-#sgqV0~J z+u>!fU*z3u{H2wtU)nmU5;|sWXRc-^+T?=!hBTpmbi+^iHYi4U{BR2zR7^Wt9pb{Y zX4Pq8(jR5kLsJKUy@cWbN;aL>SxjLoZ|sn&ZPM0C>Jp4nJv8q>*m)>GyhnD z;$!w0ETsuWc6(jeG16fvK#pM1?{%6=FQvsl8@EQ(B;a?I2sSv#-S3ui^BB}t%P+h3 z_cDCPnaZ+a6z-u|#rLg?KYux~qE0Nh!e7l#Kiv4!x~LcoZ|hs)h0d!ojYG;+)<;#j z!hUJhg~uuV4E(ZZvopF9Gk@;fk|!=?GHV zehLn6fpFROx(H`e@qlL*Wn>$R#(jfYJl?Fw$f=M3#Gt9MLSW10&2NV8`t6L(JF#wB zJi*6^Gh`x(;41=_0MOmLP46p|XSGqJ-7}#(?D`lVqmL^Up18HzE3B{oaLu%nC_H0} zZE%j)3IAb`t+KnCBSeL&tR-gCVu4R<<`?~i2vi2mQ3Pe0|64aa@AVo+&K3jigVUbB zuE4+iZp`JrnP6?@PI{w*YA7li`AEi!(B15~}#edGJS+Wk;?0=KkC+WxvfOp`<1h1^>|og{js#+DMaBl$I~Vl8F;M$eu!a&UYkt1m>B3tq5WV8SapXfd!p&g^u$ zV5`_Ko?Y=uiIr|3t20EF9s+pJRcVxf9YBur<|n@%tRm6#5kwH~;QaX;?I{XutiDk! 
z@~@@guj=~1r%lafLXM!t-7`5%VNHJ{5J=y{SR;#9@9>cOj4g&!aj4-?wggo4<`AF# zSg6!yesvKtEtMYZey&TtNTqs)bglkkPp8)av`^qsLC@{x85whWq6edn9(zZ1ZCQd& zs{#mpkaw)PYk6_Rl40hgSfdiXKF;nEBztuw5-K>+}%O@I{(n0Pfebor1yFx=tCW2IU2e1u(?nGd% z_s^foJwHu2d>Lxmj9l_@$wo81^Lx+owfyYrDfXaC@lA{PousdURj57lmw~_wm2hk5 zAV)1r*5k1QuHgC`8$=kJsx`Y;XfU<*XxQX(_~7_Z=Is!;72(AOJs+S#UM$io`wnKR z%Adv8>mStkJh`2HQDCO*6E&|7?^)z-NbRM*Wr@d9{}>%*CNyMRe-CCa+zW?#HFt^Kw6XdRiM*D7aZwH$| zd*|@=tIsmn$U1|A>?2t3&~rt84GdgeDJ2*EQ_LcHu6PP8)O2yqAya87P$+n0=`~Hl zUKjFW*j?07VCO+%0WAmkq$DU$H2Kt>wrW)5a7DxQb~h0bEgVrqg`4%xD6_RtFX$X) zmBl)d6=;7*^BANo|2X=Gt4(4-QVcdkJo;meFb@I%U|K^(3bt+p65ecti3xx^^{4|5 ze5R{RI`$Ab`dK~Ve&AsrBW%H*aD{$4)}lbwF|qfbN$(y74oe1NJquSxXIjy-g-u??~hrG3NQY2 zh-|0lEsJ7JU-(h;K-al~m(t4=$Lf=+a{W;JIxQR6myGu<_#N!r?9FdfgIJ>y?YRfx zkvGuOn3y)qf7{#%L!TUH7D*S8zI*|~B5+P!{nMv^jEE0r_CHA0dlKE6n5Z)_nf3Bj z)Lq~Q`T_VKtpgrLXmP*Qy<&8}Y(*pNu`_UeSS5x#A`|OVnfBN(f)Slk zb=@7eH0R92`3k|Va_9Q%!I-}`CQmr&4gEHqeP>>EF^ zgN`(VVMA8g@1%ZX{LB#_ea~B_#hM6!DyYaC6bF=8y24L{!49kZHE;v+k zTbA!S#n!Fm+@UMux#choVPawe3yPe6YE|@)FY%U1ri2?XvnxfOk7hp-7nis}t7F?Q`qkwr9=7g`MBi+s-{@ zpX42;H{oYkPB!)(Vnqo3k8m%rJ1hy2t@pKv4bGmJVR6Dy&BNvbXIKv8xHDI5Y@N+h zDx|`Ty*ST=cQ~x{vDNZ3Ll@v6T)g4ln)!F-Spg3zr;V-{*L7E+fQNI z2h{0hDJ-`{Kx$1fn7@AoPNX3RI}%Bulg5?5=U9ZHeKm0*M56at0-kM0 zGBiOrdon&^RK_7+&#q3RjpX?ih=T52pue{0dsb*>&|*~EI-I}lG1_AV$^+IwUxUK4 zm{AYulWyP(#+O%Ilt0{S+m8Touqa;qD#10-t*00Qeb7Jk<8A+G+w2=EG&Vo^3A!?b z>8#T1jdV1S`q>2Uc!U|}bDkb~14MZC#0KwD>4GF)+uUpHBg48om&Pu6Mt#;z2S z5+CZ}nP3_XM+C!50dM%eD1nNp69o?Nljgh4W7n=pAqoLFEgOE5gYieb@qg56@PA}L z(gE`24nr{ty+4&ux#TZa^y>cp-8kXX<9Uk>zCgeIt$Dr5?Rcx2)Pr3s2qcch@H9&i zbGdog=-`BO@(w|_-12n;1lRwHs3XiCL4ZHQuC;FR%hYKVdmL&WWsZ}iCxUgTpbM38 zD7(pzI02u^^LdN^raXMsl2WjiE^mKy;t(VO=8*iw9XC}y^seL09Sl?YF-8eN411bl zot~^>dNVW5jmL!Z=|{L7rMl1k9J{Uqy%mw-ylau$nb}Bb^4bdFEWL8GFLIL|KkTdv zUN{$jS^>~|z!T>qrBHjYLna6)UuOwjU9igCMnG)%Wy8-ZV5#={FyNK4eI8Yb-wh;a#V2k&Ylp~Y+&gvjN+8$%gYZ%N z6PVknOZgOkvp!WgBR?Jq&69}CRmivLk2d_jm6ie)-4FY#h7oQNi;!~0&2oWn`wS4U 
zy<^h&LE6HNeTZ%yKL$Z=GAvhre)}69DOziFlDF-XU8}ayo3qjwTKc)|ljf3bcX!me zMfWU+Lm19x)WYhE2?yJK@I3W49=iCvoncw`lEYbK%c=}^8v=_@&(H>RqbKTj>}>QZ z-Nr{GfO_+*%aB>KoFA_2x;e1z&m`vPw^2HHCSwGsn)%6XN%LeyA=~C_S!GHwsn&1O zdr6KXISNc1Xy1s@Iq%%fK&SkLO#A+|8=(Yd`lRNhacjtUh9llw*N~@_0o4>Y=vI@7C|0v$2>?) z{@uvJxrd%0w(%GaZ`mi|%^rf&;c@T>Nh1+mLotq8b?gHBwDb}_{AITtE`J4@k-~fO zUs5dwq9H-u_X$Y+tIl=pOjMNjmbkQm(3C{8`SS`K>DnkeVh zPgRO>@-peoBv)BAWdh7`oy=$p4+6>!z;E)m<=49=+~~fi)PwqI0wEJq`(YpOY^}+} z$?Zi@Bwc33jej&Qy@)6n=0UH}go$0^ljj935c6C?M(2XZjIeg;Aeh?PKT)Fps8g=Z z$ku)m8B0if)GM_rQnm|w>>EpUd`Y@aH|B$e#a|?}CHK!#5L{+5HH#d-d@1I`-@4rE z#I*H;TA{r)ZK7HrjS8 z#jPNBMy0noc)+Zuq0EVvh_+xl(S}9f#}=p!kTPaY3 z4=n+BUA)bzTa6H)A(r+eDPR>JmaRD;$zQqaJUBSz7w%YflX1YD{7Ma(FqSLT`=^lD zXC|Ty<5FCF5AhTT)~ft7)O=9gr(~8pa{9$GccmPB?z!75kEGx4rO(o%I9cUe(7@>e z^{nkMyH~vUl%X}p(eUh-rPo4HG5>{#jdB;(TIT->%MV8)ko)t@JwMU!K_&i<7h~3x zrqeLpw-HsA?{TRre40&+tzANo;G(<6dU3A}Z|q;7K^NBh-Hj{TCk{<>3`=)`R9pwd z_LJWl*CJfGj5W(>^h6IWL6petUx-@C9gGim)KPyS|JmSAK@U*LIUXc|{I6U*Oe<`y z7q(rj1gu*W7S~WnY{mdJeSebAyZ8RkO=jlETo?Tya`_AVcOVTc8GIJD3J(2Wp=VXU zF^3d!0ETy3@R6rJ964DGzj7(Ug5Tc}Nl_3T42q~oDr5I&OES$#psU>^-(L!IvH(rh zhFDMa4@sZnzq!nq1^{IRxy<^?-ZA$}+lnBQVPtB4pH!77orj>O04V-4=9E7e+LBuD zt|?9X!Ii&n09c)r-L6a;dm0-Sk^fZ)a1i4^c`^wyw@bQye9r~O^ysY`xovwq)+3c( z`3An9gU~moAwG;RP-e*&P`LKC&E)-e@)rFi2BDL~Cy$SF$!s+|emvG-4vr^F{K;5_ znym>GVKdY2qt`XZ)k~fHw~JJRS|rM9`a%s?;l$1cOB!oqOnsT^`c^Py>060)U-S`0oz5nPy5@?2u1Qg#e<-%SEIDle^?vJ|XI`x&lW1^u_T#Ez zxLTY9DwZgi$nnWUxv*7>6kYOmUEQnZJ^shScu`DLjq7kjB$(;n5_LyArDsWMG*)EFRy%CN*DO zmDoPDLgT~WM4F}D#jyLR^YZAi1C2^^BXdbSzish|6D;VFsi1Qcd>VLE->D{1g#`%@ zRyBj&6)#SRZ2P?09=Z|{!s2piki&@EmLnKh3e`<&kNy&>0!vC)&Ltu1@V&`&^*LXeY#?QCKKX$V8$s)RPe$b28+ZgYXAOfu}y3b_Ujg7ykts2q1^f~|?z8VY<4X(v!YC+kecfC-)n-bnz zNLKR2qg$kU%?-~$m$rubi35kxaPM`q!pK^BT^lHyI~lq+3FS!GNNmYPe_`YZRr7Tj z3{c$6Q38tjAaNLs0jY&UI#qS|Zs@*TGO7tx-K}VzT#fmD@Jo+)LUS<9;S)$NjT9|~ zV>_?W&Bs|dr{913dp3YQ@B1%JM)Tojlat5kDW2vca^a_zG-)J0Hr&JFJKm*|NpZsu zd4--g@rR!bXQaoUFB_H3nz(6`2SmiKP>L2|83)PrFEp{I;!TFf^N;%7XO76b)azGx#y 
z^WjQ6ZSis|;rfYyBnw;+Nx;uPoh9~_h9hb{LFD%lXJHA^eZZ6J{Vf!9@yAd3hNWI* zb_y;ICZVTODB`m>z+?(4v(jJU8sQ<(OE=*OjVIubA7eq^+vK;m2(QNdweDmgSO1Lit=SDM<-nL& zT(89OrEZ1O;>sNvab*fDL}C*fxGIz5B=h=Wb)dx-<=05Hy-67hH0 z-Ab1UX=-E zBJt4<>fqH-*`tcBKWU$M}%KIZ~#E$_#YaYp8! ztWke=IaJ+9H9t?z-!6!tx1Kt@tOu7@RXt7=5cKdmPNv3(x>$1`qe_4ImPlQA0hY@w zCuy#fSZ9%3PhK=X+!W(JBz@fz@J{dM%jj=Ity(;GHW8E1iBHKJG7E|%W!S=JBQYvW zYBIPN{KpWK6K@1qwVf`@x|z3#<+t9gO5{q%blH%Tsw=J?vJRbUG8(!8x$vC13+lg;9!mPPZ>R8k5b(nNqbykn!8w**_QSRfsT? zdqB!>VYk|ou^7h4M;4Cxia*1+pE@-3jAQ0&#CbfLxy9j{y$Sh-vv{U1k(czk4A||? zUN+rD^d33y20&`z&>%;`ujv8mP9>4^C{Z)H_0Lbqg;Lxf))GpGn}w@}8(><3g-)>f z|7y+N88h(-Xt`u2U_rUZj&U10olIA2Q1akQui7swM+*uw!Y(PC9_f4>{9da9+nM&M z(S@mFbhUW~Bh}%SO%1vUhCy%QxhR9rXD5o=nZB+0D6{7uLz$4UdR0`%kzt0HLMrq$ z(Q>32^QuTGF^w0cSiPX1$$S4l>}e8JOH|ZOLTAZ>vbmMBWhcsJ`4W0ca%pZvi`#_T z;9UPa_@yq@JXG^u7^8u|(IB}7nZbBdu2XeVr}xLe^0Tk0KSU!S<;$6u8_HuSUdDaP z4f+;a<_Y59Pl&6cSN7vP`O!uHt-vC~Qf#8f3hTZI12@%3pbwiD*#!RjS)7$cc>{C3 zXU^}G=Q}?fpo{D};Um*~Yp^*?wx>+M(BZ<^`gKBn#V2kN0XcWn-ne<-*1fvVg~fO| z#^1fs6620_o%=eIJ5UtbKqeGG!;B-vaqpjLDj2%*w0ti6vO^wezJ8wKkfq!R|M1Pa zklJI|Q-3k5y+}TnzdSsaqE27cklYcjRFiXuy0+263Q}%*Q)1nu4~0)^ zsZpxb{QBRPBUqP{KD#LJa6PoM&!6~K^aNuNu8aF?rV#Zyq_o|nc;2+UHpY1@_FI!B zW1)w57y&&gKfRJ1ZO{(ZyzqG49V#bAQYGIHFIMBm^z^2*{3#!T8>FoT z$7U62N##l!;`VDo6AgE;F9MxIr5SKR6Xep>7r8#I-Va<)eAx)s#W+95fl)qOqnKtQ zC;X;fOI4wLiUX8#&3V+~1JL5GMfq)0unV!%?J{O+5|Ez5TPPB<^}Kbz=9C4(DDn&bz9(rnp*S^~8ig7h$sl@r~hxHr)iX zaQEn|s~ml7A0_ZOR%paH%wtZYe>m!#X}pQXirk`TRP&9tZZ5%ZzUtp*n2J)R$htqh z1Y%CF8)rA}o3XPGP=@M#lZRRK)LLU9H@@c?nWzg_JGVOYnZJ8+IP21q+VbIYCi0)S zQY8io%CH57;HplJo_4ua8r88pR`vdRv*E080e$nZ*YL3r1&+Q8UgaN4Y+4t~udHxV zp6zN%{qV>${ULtiggV6Twqi{@9wr4d%g=?!%#YPAzeR7TM8%16b*RaTJU~%*&!Q+J z)FmRwPsCB@+PpK-r|})_V4-^3b2BE(&9<2+4h$OIJ3f{1*HHRPHt z*gCyQZA4`h1@TwE@%bt|tO*P@+zJfGRY!K;X&JKv$r05ymi7NO>kEDylMMVL)A`@uIZ!e1tWNrq?#&?bn&RrJD}+z* zsHkr@sX>m_?40<-oeDGdmrH@5x9u#M>6zLp1;)vmhmT}~c(1>b?&Od4=Wro$S_pTi zqYgaflN7fHmDG;-6OU=4dZ+u=oeqHnvR}xz==Xwu9b}uF<+qX3QpRwGU{dtO#!+Y% 
z)04JhIQt1jYIAD#Q$A09uSE;ip%q2Z!_`UEW%PXvh55qC5?fsa47UZf?(VZBazF#> zvyJ<7OIah^H?8Ywm{a9O9wY$+sEn1spQx+ zDf%%l+W750z5(_UV`ahpguMw^7$~Fb@@0m!tM;{xpOz@ahC@8 zBy_5h3ud&m_m#|k=xTm1>63ZqY)^q>#_@wB&2R>V>Wd`2yc(kl zr!#$5{a#&tFzq^fRTdef)j2k_MV)bFt6+0MMK}dkx*MkZlaEsAW~w0l`+%<;k0+JK zvc0yD9=W4}=;-Yg3wh=w^B+vqL|n`Ui$>zn%p^2jqI1jl==_AnqMA6JI6Nq9|GE{U zf9J=XyU>}hqn%zr#htRDGo>!oFMY~PW7wv1A?cKoy=NomqvcOTFgdsdooLl4d|R@{ zaGmg*Qm71Jt4!tZr|DhiZedK>B8M;Z-j_$!zo!<{z!lAs7xqtl+<^8tH|25LQ7eyY z9m11u?8#aq&9MCJ!KNpp9)>hM?1~g+76egYiVszKUTZzU3#aE%Ma$jS)!*^YGVFJc+SN`$i1GA>ysM%cgbh!S@Cl4WxqTA4mn z#DOfjR#%-N`{Bt|PXQ-X_Y9k(d4PR_|B{IALfI&Pn1x!I7_TBiA)T)Qow ziu>?}YOrH9#_P`oQy{)iX?h$-F!(>37!eqD2<5+NML3tvk^v8U3N8)1Lhibp_}jF> z1dM+Jh|VqwMMZ3dw|99ropTCHad^cNomKVp?VYpGQZJ3&3zy(zc#E=n{+QM~$k3l^ z7<(>+F$pIzvz9*LyUGi0g)cn@W7j*xR@k&e7n%--8Bb@t3TiD+I_LR;gi^K{26}!c_dTq17Y{5gUSTw_{1|V%8_Rws!vgM*rae$ZQ}X1T7h%f z-Prq*3w%wkGYYN~>SwdF6J<2dtf?&#|9dV$_B*pUC^6vix33lvW!{v5;H;VQ z3Jj(ONJz~I?0Y#8HzxDisIkz&w~wwg8rMyLyn*Gbp`<^iB|2`CLmJ8cSu;`1Q@t zZB|-Nrm0%oBJ}%yIavsC@oaXudtO>LIkOfm+`3pyMbf&(W+~h9bU}!!b~N zYt>s2BoVbrt~8%5h4$KSKG5;cgPPq1+5om~51m~Y_xJ5Q!+F2m6#>(wt%h(vbN1@k z)RPxN?nVe(P>S+=cDHc_7?zJ8d5Jwk?^1Ddu}Clfzz=W8{Jb(VgQ(PDoP%|&n>=Uz zkIhFj_wv#{+C)Ydl~mjYNTS$qtPz=iep;`6knjLzbvw7LH0V&?7`wz|v}P}U1|r<= zhpyMCJ?xGb;=jOn+4>N%x-@LN>@}UUy#tTxsmm8$oG0F?zaq;L|d$UZl zD-5VS%VOHKg==Y5!6j3-10gVv*~Vaiw`?N88!3-ZP$_(IG=I1r2ZCt7#SuMGw#TNP zOo2k7-y*Can|qWbY-BDcI8Bd-qpPurZfS&B!ss2|Es4NW`7Pnc z>n$UaPJjbQF4$e|_39~W!bgg{Lp~~5+8QMbXal-{Yu>wJyx|FxKXM?yVplMh_Fhmd@|P6Q?v*shYqH zsw;rWSj+t6s^*Yt_RnszeCwN;X9F$}rcB$u7f6Ue^kaG$kvx;LruIYWd5GUolu7g) zb2a|P^$F^URMw;=(fM`Ui?a1z+kV8!Ag2WNMcjM%ASgVflfi}5_L%Phgp)Ff2}IM( z_q}rE<-+Dxj1rBD zjn8dk@fIROQJFESTLu5&F3J>Lsjn(jx9zwhC{L?vh`Hx9!?03=xS(|1c2VB|r4sfe zXVSMQ3+%v3*j9u`z{9v^DeOv|F{(C@?xF?jtMOeBCkciHD8#PPWCP#$-U@_2;G-=Y$m+xsEtqf71-NyK7E$W08&iXd;um+OYzqCJgSNnZZ< zR|zX3BeWlu?I;{IYv%}51vJKbo}oID{%}yWIchcxaE5lCItVp9Hy`(zF5kp4-p(zx z*&s4Me>kBL*%>YE=c}MNIue}p^!yLmI6JlGtaN@*`dOL&_?D(mTQd?Rp3qEs&YphM 
zpV#YNb>&JQFO#qQ3rZ97><^~+#c}2uzPUZ6F z)@0w|^aU8tn=k}8ia$N`>+(6aZmE-`0%q4n`g&4j|6x1%RrXNE zdGw1#ntSs#lrpPKG}KEKs18&~_qTS!VKKb(E`e=?VYGoAR7t5Z#A3HYu>#ok7jVge zCzR2-k@9R{h~C!)HibQUb>z+siV2gm9zZSay-{JDXJhsx|W4b9JB#Rlwa=}s@3&i(=c)q{FL)HENYc{>kRBB(n)bn)|7& zpd?t;SR94$q`TunNH*Oq&#H3+-Rb$q%xX`?7Lmcu)bo&@zYw&cGp&9HW|jAYVIus& zJ+Tas)3SEw#VVqmZ`(oVrExhCNO2nCHiwNyjFiEht8>>2H$1ZN4)JR;_A;J`xsyZE zm|Y{?9_wdiv@y3dd-w09>)D(Lu`-C{)^QBmUuA_$PLf=^*_M% z4a0wX3XFG4U89|*1)d3Yzk6goyT$-z0vb(dB{4!5Mmy~MqUu#6(>g2K2pr@NHg~IU z%QJ@~XarOqO^A_izPm)HCb43eZ06_(T)j^XFh(yKE()gg!NOU2qJ;k%a z=c&9I8vssCMRpOCVNXY6jvYID{<@-^w)Ez=IZ!Li!$-!aWH(%6A*P(wnEwWYQ0|D< zXom48oQrA1H>?=!v9F-tT>}T`w*X7*$b>QR;B~SbY+m&Xd=%9P-nPShSX~3_^9Shp zBw%3{%{s$ae5VepR4`2Y3NFP1q=uL`XTTlR9%)y%$v}g}9>E0>D_o>1*d?4qP;aDq zKG;DL#(kjfD2dJ9ya0=~(6$|LI$KLTe^*IJ@nWMZ+7i`7Weo53GhC({} zt<74-*84x3nVQ*5qq!`nfYckRcb+&k5rIs;tvpwMT=l9VRb7Nt@*M|{i0LPug6nl7 zm}{Hb1DU==-#~1YO|V(hYl^Tol|JykTlBsYr-C@t^zC7|9t) zf8~%`1~BN}$it3|8@@t)WqGxi91xqmG!=dNYkRb{s7goj+S4;|XN5y(XyQwk<+Tmnn$VF|IAZVhwz?1=~?)#8wNWSd)j| zl6Tm6PM9&-#}QlYR>0}rrt!Rep=au|N{_<`p290)*UvMsPOZIC=EAJlU1(-{&G-rQH&wx*r&^LoWouR7p=pdE4EmuMHjC zr*>0sXP)`ja%Q>^52xQxfu*X(Xh0LRUS2;wnw*sdDjgpgg0gc?DIfN={dRcQEg2G{5w54p1;5XWzjs;N3z17C-~#F% zeeEXaN)rM!Q5rA~`mJRdZ*9Hspu7=PL&NVdoWM#U`CKVq(2p(BsfW~so#^s#H62A* z-eX@dJnRF{T^*%MYvfQnUy)_iQ8mn=l3hcp%AZ7gt27JEkpg$32%p*f84OJ>wf5L$1q(twq8n*NBCqcRZ^Ty1zT!x+5_&UV6 z<XS9m-k|JmrO7o
  • zE0*=ai>(*$x2marjqTJmjCo?UX$jNu*Lzq!-vzJ1m9D^NG2$Zc9NCW3ySh51X4kMj zEhdAM&!g$hZHpi)76VPAL3YL(u&`9AaayOUwr zJ9oj<*h>8SNuefgI+rtBAJC9#VWfShKkl<0nHR`-7w+pI+?nspZoa{O=EGYFhhy(I zOX5OCuvvYtS~SJ+<2RDb*^W=wC9nhofEJo#0&WbV+<7s9RGQHyTj$Up!iCmh)%qaO z1lX2%k`jsSPMeozLgFvb$Kb$0Kum(|UzB7h8RB*75Zo74F`OosWF85I**6 zxY^!_nwZa@je!}0bAb~uqwAOE`~wZn$rUoE?gFm4s`7KC$4h|!-m@S=%pxn# zJaw>xj?RgI^$6?GKb(PdLgfjt{m<_NBmdK9DH~hGc4|V3(uxwxWAy5x{8`xrk+FWt zUN;#6Cn3>T^L_id%fQxy)Su|=!*8ZS^wUxILgSg{{6G-!h>9Tx4Na|L;};Ev6|*i0 zjlJr7s~St7AEB>L(khJ zgH@&YlZSsMXf$O{9vWFdFzO(H%HGxbWc*tdn4$X69nMfkd=ZgCtOPp+sbOFl$Jo#p zchKT)4iZ{aXzb&JA=cAYC8Nb~nqZcD?%E1?dYy64YmJa&JuFJn5S*qTY zNTaSa4+yy?ipaKxeIpj>%p!+#uc5r{kCD##XDIvcI*y)8p|mHKf}SH`A+UekGyWNtD z*vDdRzRsKur)pf_q`geI+V-0TimM;W)$ATE!2iu3+?)dtQH4XQBJJg| z#|m}fSxV+qe)wWhNe-!G@5U_o?BnkS3C?}~^@y%AYH~4T54tM-&~KbTy(&1FRfvok zk?k9ZQ}a~PjCX38p-&1~&+A3?Y%`jj4jceTcwwc5GjMe@t-+zgDi1e) zB0R8V!R>YlwNbnLCw#5=_lJrMa=d0&Vyk9ym_3#+BBf&8ksXBce^eya@xl+$&)y?s z|G5NN{2TT6Kk0Zvwp31fHrEX{PbnGXWc(umI2>5LCO~{1Y!IIZWE;#KD}Dp+6lBFt z@wbBQ_o!DHHLRuI>hE>&B@_sP1fw#L}&ED_(`uSmT_f-xF z11Z~oUpk`>!$8xp(rgKR%Z~4q`k6uLQ{GojW@^0Nc#J9o^GFSZ$S%DD5~AoYT=#at zniT^M>JQ9u8b+OMAgRmO)^*A{dw8^_s}$qqkNh}-WlsYasPguuPLmMTzcqUWI&38+ zj^JamA%?Q7y~rA+Pdl2PgUF+RYrQ%^g47A8N9h|1BwCPb1;J$fgfUzZS?VBF@nuQ! 
z1J%Hin2sfl+~X2H=gg1k&cNYoa_OgdbUnSNwQ6FX{zSVy7x98=KN*#4{f+_7?KNiD z2^2vi+j;0}p&gCL`Sgl4b?+L`f@%4#q6SS+h!w+d z1B=Yq4C^4>dq^?EOBjcs!7mYw6k;O&x0cK357LrMa5XBW^xOXXVL+ZtRV-Hwi4x4g ztV@3o<`QNxl^y{LFqMe7NoyPS1ek0jv4R1yD0m4qDr2@b0*_LK`4C>u-&;N=bUT#= zdGrfS@}l$JQ4>D@eq>i}6929SkJdl;^Qs6Dwbz-q;h4vWxiNAfGozmG?+hM%Mf7$D z$#(W(tZ}J4;#2+R$LZ_g9u(yE) zo#$x$0$O%`h=>Me5(1Ej{4P%>pO0Kf_bR3AK1agPo3oT+AM6$;{4y*=s7FhryS6#E%giK@{kyi@by${#z zp||>5+rEkjZ#Vu+rXECiS5*YWlDt_R>+c9DQh-HBC03yRWoT!E)G1 zf*1F3K!xWWAo@Ruyd3u3ePj7xUarl`8;R(|<~9Ac_c)yeCkVAf2yA8(r4@5lOsAa#`h5wi)sqr3;7 zt5Eh9?ae}3jO^MWH9<_i8F2T^(fy9~W7Pqygg3hYyKE;l>v4VonwWVanau!(-R=0u z`EM3oH8=H}aB?1#us7lfcS0(k=goh@AsFG%LF!P>p0UBlt|hV%#I5^&pkr+=4s*`5 zd==Rofzb|^MrJAQJ%|PWZe`pic@#T3HZM(5w%5klg#>s6Jl6zV#m$A8bLAlge zn(_NSbIy64bAD%kZF}}S-)Had`}usI@B8_F-k&iUN9WOLsG3nNG*I?sgg3sIAb0_- zc7YbzV;%N&OE_2gB?Gd0jT*QW2&|=UkamA<;Q?n9O?JA2%2+xi^{m6%KaqyS8YY;# zO%nG+&W3}yJOeVt7Z8FY&9Ok96+TG;A6d|>_zIZ91j4i7?yx@SXU{WWB}3QlqP+#6?HNksC-_?ZHrWDxkLwoy|secY>7=I2&Ye?iqiz;QLMn z7`7|j`#L*}2Zig)DyS)1U+T${Osskp$Rv3@U$JtzV2xGSV~Vj@C?Np2DX7n3w9?e$ z&m8g+;aI$SIO3n#6jkm@XbN~GDXt{-QcAQzG{eGFDR&TL`ZYhPjRJ;aqxpJ3>JIhR zh^_z&#Za`6YtXz7_XAkqGGSXRM_6izvp!m@p8HO3p!j^B8#Stw6GGrVR9ZyuYmc#M zUR91k=5+nikhY~k0~3#YGkW4D=HVq-tejR~Ya+&bgs8)SJgZ6mz{kHD$Z<7xyJZ${ zrsb%U-?l@={J=ac{}?bNv+tC5!SM^M{KFrzw7Nsylw#%0c*Gyt;R@c5o;noc*w_GECWOB5hI~3b^*Hp6LAvD#OFlFIB+RQc9Di98=k<& z9jv~heH%5Ty$_LDPi{Bz4ovi!1pHy=uqHnML;F7-AE?^wDn;?VYit8G$mJk7TzX#m zaB!!%2`%I}uzyadLrQg!f#c8sC}?MS4yFt@oUe+6e+Q|+EoX_N4)GGv7h_Zr=|98G ze=|#5^m_`eJ*Waz<3k!Y^c_QyLqsT|!IrJM;JzS~BGysyArs5=h$AUETui+T?iapd z-$D6y9ds*cD$s*?m(x=f9IeN|y3*VJW3m=G+Dv3zR!o0Po@NzRs(**y?frIpnHS@% zE1{(g<;$Qr!a|jwlE9c#$HuhOjH^jz66+G|^52ai4Es}^Ft~H#ke;IrCpp>J5`pki zuefGewITWGdNlCU0rh4Dg_BW{K|_JSu*KzU?T!>(!Ecn8;BKL6y($Gw7tWJ|j2 zKPG_jo@xH5b%fQvAr?#Tj_Cz~yQiWY;n4?TBLyTPMIsdia$~WPS*Q<$GxGO8c&>K~ zuBm-d+alGyx12hi%MGegr&Zr9?$@y2mLec1G%k$Vop3O?d0g0`mDfbI-cDO#!yR}c zuH4cJE|*I$bE<4Auev&%8zx1@FNF<$@k=*5EA5>v`1|=R%j+sQ4`Q2|D&pa7JC91M za*S$^ckhMMKAkTf+omlAq**++q? 
z?>jQ=p62r&ynpCe7REpaJPd3O{LCDDjG+C-Q|TWb|@}N2PO8~(J8jD38O;0 zW`^C!1J=&aP4$!&mwUXavGMWoNk`2ua*>;D0iI7_NC7r zoCuo?86sD+nFr;l)kvWJj$NKl&lrncZfVogENu_}EyIvueZRn|N?7B_x+eiK9_M!k(__Dey$D(wX z#>nxqK(3561QiUPBWeCafa0mmQeJ!)NR}3?v7SuzTa?i4#LG7=5H_B?O(lKF(xx3i?S{24U zOZ2h&*QGhZK9e;TJ64~yVM5&}Ybe}_fhBvE{2D&TKEo~2N&J&ufxhliJ+m?5TskqczxM?Vqw25WNldvi1ZeM8EJ*A3SnA@UuGx(4uV|>y+ZAnfGQ#*-#FF+4q8P*h zLLIyxZ)z?ZCHw0UD+g}l1&h8y^= z2@BUMJ75pqx?G8EMLusg?RNIi4UrWgzV#B7h(+uR zZ?3Wtg!Y<42vV)%&AbE;-Y6U_3jyS@M2;M8D!LeT)Id3jEr_M5)tY@Soy4dM`R8@3 zkADVyYs>xh7xWs_B&u!d^ZSXexkmPDm)flu?}n5@&E0Lh$opGuY;VkbF*>BEVs-T1 zE#n{V^u~Ivy<3v9I!R49#=H+iUx^a^aHp-l8=aMtaoc(3OIOAi5=RXflNlk!F7%yx`-D9v@DdxBjfoNjRTH zE=pU)rIFgQO|r2RDSpQq-v$t6I>=u%Bgpt)<+S}m@GU_R3eZt9YT4w#i9`pF%I)Q? zv~TS)p!d_L+2^yDwCir|=5y&IV2Jv*LSg2ov@3OEq7sVxh!JRt`hcqh4X%QZ=&y*I zkqo`0?+_osEDmhf3l;0fmlqeg{Sm(a))@0Oh!F8J$ie~$A|xg5h!+Hr{Enl-_LAm# zJ`dPlMfy08>Ghq_uk5@B`r@ycw)SQbj(W6pb%x7^ zBG*5Kx;zVCU74X5Tbn#CyOtX~JCs)Q*sS+(c_OW|v1e-AQrH2LaK}f%?pN|cSASv* y%TFyB4RP=}J85^mRXl6UezwG^zCKd7Vxtc5s4EP41/lgPSIDhY/PS3s20ndy5M9d8upFBwbpi5AG5tu/XnwQSRginuIaQc5zMZNAiCXie3dVqWTKBt6vdxwytl59phJMJsKLdBN5NAAgCyP8Kwb4U2L7vlZI4I5GUHQRz8i+WQktKNyTCudaRUZowstaFIU1THDJNhrKMbvVuTzTRr7pGMTYE8xAlpvQvErFlKfXB7CD/DZN4qa5se0F5ZoHC73FGN6m83gTAp+KnPL1Cai75oPkSRXRbE8H7CbzNKGXl0Wp3ixOBrYKtHPfhyNnqvjOcsi4DJFE/ULLB6o6L+2J7hcV2SRier1Eo2lvO9wTeLNkq4S2bH+Yso98rlITkiSTJLU1oVoxXz8/lNGWSatvh7QjlSxxp09SGWcUPPyPvEGcM744+pV1hx3US0xVm2Z53kfrnSLS3By7tioJljUhHKSCSChRXcx1A5AcSx3ZMnbeBqWd3BNX2egDVNTDEETdZ2aQZW9KYpii5P0hvCjss0LB0fPGOsG+140fRZeqKVsrv65vefJQT5Axl7L3wLlyQ0hQr2Qcibrvog9NI9QgTlOckLIWyi7jgP5ixveQMbRjlosPtf6J0rShvUFsCIJ5aYyynmyxUIuUrURZjiT50OzOb4QQx8kOf/xzSvLNIs7uSNjIxPzOz0ylzxqNsNoSdWW/AzmwwHmn+EHb2Jkgb0dKCs0kL6UpAeNTqSpYUhYIQFW0ajB34qJNm66RJWg+MWScwdtxLnsFln7zJoQ+U8BmBJeMizwr0IaX6yF4NtqvLdlIAFWu+lAbYDRN2QNOIi8YDzgh/Cpz9VGmGN/P/q9KoS6n9b+AOpkT2ELuNBrj13Yck/fmNhnMnfs/caMgBrutoYKqtdn3fYbdsO4IefLRa1C8X3SrYbKpqHV7HNuF11Np5Fr5wkHDTcHuvMzoZ3hG5ls4u9BqrWekfDUd0+kQ9erRB8ievyeZcq2Fy3kuanGvAe7/jq
z3XYANm/oisDdkGci1gooTEqbAkDomIJG4EYCREyXt5YkWiqLDmNiprbAGrsHjGrYCKce+CnvJXLjTzV07LMgJ6SF7ZnoH57+kbxBzA2cthPjMwv+VeHaVRhibASwTIi4wfxeLoTxyR3OBCC30r/2CdSA/3Hq4lfrk8QQucPNCcSGgNrj41OiwoY3TVQiYTq40IztfiZle7WLyJmdKnJxLiaYQYWqAc59XR0VXpLE/mB0BfFqAZnEG/xZPBHgj2DYLXGd3tL5nFv8PNoo3Koi2f2O6B12Cm0VrNULfbmUmrC3qgNTBoDXlsQZfiHd6V2vMDfl/fm1Z29ALcguEyHiqv8c6aWqCSNHKX3VMbnfJhryq7Ub0nr6U3VeTc+0aDI4D2tQ5rEffnhiqcFvgDM5WxyEh8Nfo+dhz+eEZv5lBilCT4ulD3QKznjUisWTwSC1cdfr8y20No7TeYfcEYrKWE5Zm90z1fBjmic4yycHnJ1A+zh3JVEnSEPRQwk0E1oq9MnuiMZyMyaaaYHunmq/BVhsHO15SRp33LCRFq3yZ0E5nnPlIac00A1me0vqBMSSieNp+uM/IDMe7ci4cfRDsCXTkc3yxKdFuSvrCHXBgwc2Fzfjd8nFWw/SGjMm1/5fS0zfRsNh6pZv5rfj8vWZ3/IQ/u9ila0bsb056/4N2KmuKvW5IQetWFX9ljuePpgpk0m3+p68KVztPjb69Bp/mWbig61Rw1OtOYpLseeWziu8pDhKcFaRzcHE9jju8W7U3KqzehTdB/XQV++uaq1Ik+jNTX32OrZp3UlhdSbh+kmumtEKWcjsuxzvF2y7Y1Gy8RAgepYtHr3IFegDvREtygaxX8gOW16juD11LIDi++jMS2YLOQpK12C5hK79huDwCbuYMLA9h3RsX3rReNqAG2r9MAHJOGoapIoLlzflvVUsdIeMnyKWjudM0F1/hMocaD5jJE4wExQWIZfFjOxHgV3PXF7HOfxBwvR+2ynAYty2nQmbaO8ey55aW+vuWFQYPtruWlHmxM5DQm6q+8FJpb5Qtbthy1JD2zbEG1uvReX+q02OYgwfAI8a7BQZvN9l7EHUA9yQibnyR1tbJmPAMCfygrczpE362VNG01PY1ynoL97gyZH2mO5S6bhmng39ld6jx6vXlL3jz874my++EffMD7/wA= 
+7Vxbc5s4FP41fqwHENfHxkm7nWky6XpnNnnqyKBgWow8ICf2/vqVQGCEZAfHYFLHyUOQEAK+79x0dMgITBbrrylczm9xgOKRoQXrEbgeGYZuWRr9w3o2RY+hGbwnTKOAj9p2TKP/EO8sh62iAGXCQIJxTKKl2OnjJEE+EfpgmuIXcdgTjsW7LmGIpI6pD2O5998oIPOi1zWcbf9fKArn5Z112yvOzKD/O0zxKuH3GxngKf8pTi9gORd/0WwOA/xS6wI3IzBJMSbF0WI9QTEDt4StuO7LjrPVc6coIW0uMIsLnmG8QuUT589FNiUWL/OIoOkS+qz9Qgkfgas5WcS0pdPDjKT4d4US63mK4niCY5zm15fvT/txQjjVuknbAczmKBCmqV2m5T/0jPxK/C2fUUrQutbFX/ErwgtE0g0dwuXPtPklL1sydbOU0nmdSZt3Qi5BYTXZFkV6wIFUgwo+BqhaS0xNvQtMvT5AbYBXBznBCXoVT/Oa/R6JJz9rmvyNOL66Z0kAA2Cr8DWPx9eS0EQBNYm8iVMyxyFOYHyz7b3K7VwOjiYijdYReagdP7IhY4u1EvpcD2LzkU+QEZiSz8x6b+HP+75E7LHzMSgJyhF+DLMs8otOPoTd8BciZMNZhSuCadf28b9jvNxB/k4KM7xK/dIxlN4IpiHiwwDHjgG2l+gUxZBEz6KPOYY0+yjS9LakDUzMbiu2T5ryxvVaaG14681U//r5w4ofbGNlZcDV73483DnPn0x7KP6dPpRW+xBKqxtDkeb2obQfhDRzKNK8o0nz8YJBuFPrCpZKChkh5dJAYmzLR500XSSN07plTDuAsd0m9yguu+ONX3qPI3rbKoSyNU+8pBAePqrBdnXbVgJQxrCnkgC9ocI0OFQ5uHuURvQtUPqq0PSv5n+q0JQ3asbdPQhRuSg420WMZZkCmGVepL5I1BVrRK8DG1269PNFl86mFlVhDa7L8HayRNRBL+GmZPbea3TStyGyNJFdYDe8WWEdJUN0+EQdWrReEojvSefKjFalcvYpVc6S4L1ZU29PJViCmb4PUSHbQE4BJoyjMGGaRFFikcQVQyfyYfyZn1hEQZBrs4rKGluGlms8oVqA2XWfvI5yjRaQE7imwo0YHWQadVvC/FvyATE3gHM6zB0J8wm16jAJUjgy7JiBPEvpUciO/kZBlElcCKFvZR+0A+mh1sPS2C/tj+EMxfc4izi0ElffGwNmmBC8UJBJmLdhwfmSPexiHbJ9szF+eop8NA4ggTOYoaw6OsQrtbdkricmmAGQgzPgKiwZ6IBgVyJ4meL15pxZ/OmvZioq8zZ/Y8WG3cG8eo5AazVDXW8dmVbL6IBWeVfGp7EFnrMN1wu1xwf8rrg2rfToBNwa/WU8yrzGJ22sGVVPI3fZPrXRKh/2zrIbVVlDLb1Zxs4dLzQoAnBTG7BkcX8micJhgb8hpzJmaRRelL6LFYc7nNLLOZQQxjG6OOoOiLXtAYmVi1JCZqr93xdmOwit3QazJ4zBFDVce9ZON9QNUkSnCKb+/Jyp72cNZZVJ0AHWUIacDKoRfWHyQGPsDMiknGJ6xKt/mK2SFHa6xCR62ihOsFB7EuNVIJ/7inFIJcHQbuHyjDIlPnvbbLxMo2dIqHHPX74X6fBE4TBdudjRUiR9QQe5MEPOhU3p09DrtJztLynmafsLp4ctph1nOFLl/Nf0ZlqwOv3BD643CVzg6ytZn+/QeoHl7n9eojjCF1l4yxrLGk4W5KTZ9K4uCxc6D4+/7Qad8i5dX3SWc9ToTMIoWXfIYxPfReZDNM5Jo+BmaBxSfF/gRqa82gltgv52EXh156qQiS6U1BX3sctmnVTFhpTVBalyesuHCaXjfLRzuNWyrjnDJUJAL1UsYtG8IRbgjoQEt9G2pL7X8tryowXhQ4bBymvB2ZeR6BpoFpKoarcMWehN3eoAYDl3cGYAu+ag+H70opHyrKmJNFTe+ARVJEBeOX+saqldJJy
yfArIK13Z4UqfKdR4EEwGa9xDwkgsgg/NHElbwW03Zvd+LrazHLWdO/UU7tRr6U5bxrPHlpe64pIXeA2225aX2qAxkdmYqLvyUlMhO70Ea8PHY22F5Ugp8ICYBAPNT2baSkHT3xqe25sUtIgOlZUeqpqTRrlJzv5Oht70xeBp1Ln5ubiEf2t1Fnm0+9PmvV/PvuvIVPK9CmnZ7Y4NKfRX1ZD3FZqqUT/uK9Rz+N7/9Q/EG5r+qoUoVrGdm2zbNseix9XLGvVDdd3T9eZUXmOqHeq+u4hMfSdd13TlM2+ltpiyW2PS4r+EtPcUY02zRW8BNOcVyVGWLpb6weodC9mrlAQA3tzOxxqbWqM5W3vhrAcYe42vHLYO5dYkYylFl3+IX7PkbYFH+uS0Z/r59jv98y24z/d8mpt6Gz5q8u2WsYpQKo9R9TTE/I/NSx+0a3SkY3as8XD7gGqxkev0MvpG50PvUdsOx9LtNOg+ZZ2emu69xXvny3lfCu2Ki6Iei7poc/tv+gqnsf1niODmfw== \ No newline at end of file From e1bdc6f83099211e1749edfcfc367771125b1537 Mon Sep 17 00:00:00 2001 From: Chris Penner Date: Tue, 12 Mar 2019 10:06:40 +0100 Subject: [PATCH 05/23] Add linting white-list for duplicate ids (#655) --- .linting/duplicate-ids-whitelist.txt | 651 +++++++++++++++++++++++++++ 1 file changed, 651 insertions(+) create mode 100644 .linting/duplicate-ids-whitelist.txt diff --git a/.linting/duplicate-ids-whitelist.txt b/.linting/duplicate-ids-whitelist.txt new file mode 100644 index 00000000000..821e0e9a636 --- /dev/null +++ b/.linting/duplicate-ids-whitelist.txt @@ -0,0 +1,651 @@ +<| 2 +A 2 +AWSOpts 3 +Access 3 +AccessToken 3 +Activated 2 +ActivationEmail 2 +ActivationEmailTemplate 2 +Active 2 +AddBot 2 +Amazon 3 +AppT 2 +Asset 2 +Auth 2 +Bot 2 +Brig 3 +Buckets 2 +Cannon 5 +CargoHold 2 +CassandraSettings 4 +Client 2 +ClientDataError 2 +ClientEvent 2 +ClientId 2 +Clients 2 +Clock 2 +Code 2 +Command 2 +CompletePasswordReset 2 +Config 3 +ConnectionEvent 2 +ConvCreate 2 +ConvDelete 2 +ConvEvent 2 +Conversation 2 +Counter 2 +Create 2 +Credentials 3 +DeleteService 2 +DeleteUser 2 +Deleted 2 +DeliveryFailure 2 +EdMemberUpdate 2 +EmailUpdate 2 +EndpointDisabled 2 +Env 19 +Error 6 +ErrorResponse 2 +Event 5 +EventData 2 +EventType 4 +Failure 2 +Galley 3 +Gauge 2 +GeneralError 3 +Gundeck 3 +Handler 2 +HighPriority 2 +IndexError 2 +IntegrationConfig 4 +JSON 6 +Key 2 +Label 2 +LoginFailed 2 +LowPriority 2 +MatchFailure 3 +MemberJoin 
2 +MemberLeave 2 +MemberUpdate 2 +Message 3 +MessageId 2 +MessageResponse 2 +MigratorSettings 2 +Name 2 +NewOtrMessage 2 +NewTeamMember 2 +Notification 2 +Octet 2 +Opts 10 +P 2 +ParseError 4 +Password 2 +PasswordChange 2 +PasswordReset 3 +PasswordResetEmail 2 +PasswordResetEmailTemplate 2 +Path 2 +PayloadTooLarge 2 +Priority 2 +PropertyEvent 2 +Provider 2 +Push 2 +Queue 2 +QueueUrl 2 +Recipient 2 +ResponseLBS 3 +ResultPage 2 +S 3 +Scope 2 +Section 2 +Server 2 +Service 2 +ServiceConfigFile 3 +Settings 5 +Suspended 2 +TTL 3 +TestSetup 3 +TestSignature 5 +Timeout 2 +Token 2 +Transport 2 +Type 2 +U 2 +User 2 +UserEvent 2 +UserId 2 +Writetime 2 +_applog 4 +_awsEnv 4 +_awsQueueName 2 +_cHosts 3 +_cKeyspace 4 +_cPort 4 +_clock 2 +_cstate 2 +_eventQueue 2 +_extGetManager 2 +_httpManager 2 +_key 2 +_logger 3 +_manager 3 +_metrics 2 +_monitor 3 +_optAws 2 +_optCassandra 2 +_optDiscoUrl 2 +_optGundeck 2 +_optLogLevel 2 +_optLogNetStrings 2 +_optSettings 3 +_options 3 +_pushNativePriority 2 +_pushTransient 2 +_queue 2 +_recipientClients 2 +_reqId 3 +_requestId 2 +_setCasBrig 2 +_setHttpPoolSize 2 +_settings 3 +_time 2 +_user 2 +_userId 3 +accept 2 +access 2 +accessDenied 2 +accessToken 4 +acmName 2 +acmTo 2 +activate 6 +activateKey 3 +activationEmail 2 +activationEmailBodyHtml 2 +activationEmailBodyText 2 +activationEmailSender 2 +activationEmailSenderName 2 +activationEmailSubject 2 +activationEmailUpdate 2 +activationEmailUrl 2 +add 4 +addBot 4 +addBotMember 2 +addClient 5 +addMembers 4 +addService 3 +addTeamMember 5 +addTeamMemberInternal 2 +appName 2 +assert 2 +assertQueue 2 +assertTrue 2 +assetSize 2 +autoConnect 2 +await 2 +awsEnv 2 +beginPasswordReset 3 +bindUser 2 +blockConv 2 +body 2 +brig 7 +buckets 2 +bulkPush 2 +bytes 2 +call 5 +canBeDeleted 2 +canRetry 3 +cannon 5 +cannon2 2 +cargohold 4 +cassandra 2 +cassandraSettingsParser 4 +changeAccountStatus 2 +changeEmail 2 +changeHandle 3 +changeLocale 3 +changePassword 3 +changePhone 2 +changeTeamStatus 2 +check 4 
+checkHandles 4 +claimPrekey 2 +clearProperties 3 +cliOptsParser 2 +client 3 +clientClass 2 +clientError 3 +clientId 3 +clientType 2 +clients 3 +close 2 +closeEnv 2 +code 3 +codeDelete 2 +codeInsert 2 +codeKey 2 +codeScope 2 +codeSelect 3 +codeTTL 2 +codeValue 2 +compile 2 +completePasswordReset 5 +connect 2 +connectUsers 3 +connection 3 +connectionUpdate 2 +contains 3 +conversation 3 +conversationCode 2 +conversations 2 +cookie 2 +cookieList 2 +cookieType 2 +cpNewPassword 2 +cpOldPassword 2 +cpwrCode 2 +cpwrPassword 2 +create 4 +createConnectConversation 2 +createConnection 2 +createConv 2 +createEnv 3 +createManagedConv 2 +createOne2OneConversation 2 +createRandomPhoneUser 2 +createResumable 2 +createSelfConversation 2 +createTeam 4 +createTeamConv 2 +createTeamMember 2 +createUser 7 +createUserWithTeam 2 +createUser_ 2 +decode 3 +decodeBase64 2 +decodeBody 5 +decodeBody' 3 +defCookieLabel 2 +defPassword 3 +delete 6 +deleteAccount 3 +deleteAll 2 +deleteBot 2 +deleteClient 3 +deleteCode 2 +deleteEndpoint 2 +deleteInvitation 2 +deleteKey 2 +deleteMessage 2 +deletePrefix 2 +deleteProperty 4 +deleteScimToken 2 +deleteService 4 +deleteTeam 6 +deleteTeamConv 2 +deleteTeamMember 2 +deleteToken 4 +deleteUser 7 +deleteUserNoVerify 2 +deliver 2 +destroyEnv 2 +dict 2 +discoUrl 2 +docs 3 +download 2 +downloadAsset 2 +ec2InternalHostname 2 +ec2Region 2 +empty 5 +enqueue 4 +env 2 +euEmail 2 +event 3 +eventType 3 +exec 2 +execute 4 +fetchMessage 2 +field 3 +fieldParsers 2 +fromBody 3 +galley 5 +gcmPriority 2 +genAlphaNum 2 +genRecipient 2 +generate 2 +get 2 +getActivationCode 4 +getAsset 2 +getClient 2 +getClients 2 +getConnection 3 +getContactList 2 +getConv 3 +getConversation 2 +getCookie 2 +getInvitation 2 +getInvitationCode 2 +getManager 2 +getPrekey 2 +getProperty 2 +getProviderProfile 2 +getResumable 2 +getRichInfo 2 +getSelf 2 +getSelfProfile 3 +getService 2 +getServiceProfile 2 +getTeam 4 +getTeamMember 4 +getTeamMembers 4 +getTeams 2 +getTime 2 +getUser 6 +getUsers 2 
+gundeck 3 +handlers 2 +head 2 +header 3 +host 4 +ifNothing 3 +index 2 +initCassandra 3 +initHttpManager 4 +insert 5 +insertAccount 2 +insertCode 2 +insertKey 2 +insertPrefix 2 +insertService 2 +insertUser 2 +invalidCode 2 +invalidRange 2 +invalidUser 2 +isConvDeleted 2 +isMember 3 +isSearchable 2 +isTeamOwner 2 +journalEvent 3 +json 2 +key 2 +keyDelete 3 +keyInsert 3 +keySelect 3 +label 2 +labels 2 +list 2 +listAll 2 +listConnections 3 +listCookies 4 +listServiceProfiles 2 +listServices 2 +listTokens 2 +listUsers 2 +listen 4 +location 4 +logError 2 +logNetStrings 2 +login 6 +logout 2 +lookup 3 +lookupAccount 2 +lookupActivationCode 2 +lookupClients 3 +lookupCode 3 +lookupConnections 2 +lookupCookie 2 +lookupKey 2 +lookupLoginCode 2 +lookupPassword 2 +lookupPasswordResetCode 2 +lookupReqId 4 +lookupService 2 +mFailure 2 +main 41 +manager 4 +match 2 +maxAttempts 3 +member 3 +memberEvent 2 +memberUpdate 3 +members 3 +message 2 +method 2 +migration 72 +mkActivationKey 2 +mkAddress 2 +mkBot 2 +mkEndpoint 2 +mkEnv 11 +mkKey 3 +mkLogger 5 +mkPasswordResetKey 2 +monitor 2 +monitoring 4 +msgFrom 2 +msgText 2 +msgTo 2 +name 2 +new 2 +newAccessToken 2 +newAccount 2 +newBotToken 2 +newClient 3 +newClientId 2 +newEnv 4 +newOtrMessage 2 +newPush 2 +newTeam 3 +newTeamMember 2 +nginz 2 +noOtherOwner 2 +notConnected 2 +notFound 2 +now 2 +octet 3 +onError 2 +onEvent 3 +onboarding 2 +optInfo 2 +options 4 +opts 2 +optsParser 2 +otrRecipients 2 +parse 2 +parseDeleteMessage 2 +parseEventData 2 +parseOptions 3 +parseOpts 2 +parseResponse 3 +parser 2 +passwordResetEmail 2 +passwordResetEmailBodyHtml 2 +passwordResetEmailBodyText 2 +passwordResetEmailSender 2 +passwordResetEmailSenderName 2 +passwordResetEmailSubject 2 +passwordResetEmailUrl 2 +path 3 +paths 2 +ping 2 +port 2 +postBotMessage 2 +postOtrMessage 3 +postProtoOtrBroadcast 2 +postProtoOtrMessage 2 +postUser 2 +provider 2 +pubClient 2 +publish 2 +purgeQueue 2 +push 6 +push1 2 +pushToken 2 +put 2 +putConnection 2 +pwrCode 2 
+pwrTo 2 +quoted 2 +randomBytes 3 +randomConnId 2 +randomEmail 3 +randomPhone 2 +randomUser 4 +randomUser' 2 +rangeChecked 2 +reAuthUser 2 +readBody 2 +readFile 2 +receive 2 +recipient 2 +refreshIndex 2 +register 2 +registerUser 3 +reindex 2 +remove 2 +removeBot 3 +removeClient 2 +removeEmail 2 +removeMember 4 +removePhone 2 +removeUser 3 +render 2 +renderActivationMail 2 +renderActivationUrl 2 +renderPwResetMail 2 +renderPwResetUrl 2 +renderText 2 +renewToken 2 +reqId 2 +reqIdMsg 3 +requestId 2 +resultHasMore 2 +retryWhileN 4 +revokeIdentity 2 +rmClient 6 +rmUser 2 +routes 5 +rtcConfiguration 2 +rtcIceServer 2 +run 4 +runAppResourceT 2 +runAppT 2 +runCannon 2 +runCommand 4 +runGundeck 2 +runHandler 2 +runServer 4 +runTests 4 +schemaVersion 5 +search 2 +secret 2 +selectClients 2 +selfConv 2 +send 3 +sendActivationCode 3 +sendActivationMail 2 +sendCall 2 +sendCatch 3 +sendLoginCode 4 +sendMail 3 +sendMessage 3 +sendMessages 2 +sendPasswordResetMail 2 +serialise 2 +serialiseOkProp 2 +serverHost 2 +serverPort 2 +setProperty 3 +setStatus 2 +settingsParser 3 +signature 2 +signedURL 2 +singleton 2 +sitemap 6 +sockSv 2 +sockSvTiny 2 +sockSvTinyNetstr 2 +someLastPrekeys 2 +spec 12 +ssoLogin 4 +start 2 +status 2 +suspendTeam 2 +svTiny 2 +svTinyNetstr 2 +svlogd 2 +tagged 2 +tdStatus 2 +team 3 +teamConversation 2 +teamDelete 2 +teamMember 2 +terminate 2 +test 7 +testCreateTeam 2 +testCreateUser 2 +testDeleteTeam 2 +tests 34 +timeout 2 +tiny 2 +tinyNetstr 2 +toCode 2 +toJson 3 +toText 3 +token 2 +tokenResponse 2 +tooManyMembers 2 +tooManyTeamMembers 2 +tryMatch 2 +ttl 2 +unAmazon 3 +unblockConv 2 +unsuspendTeam 2 +unwrap 2 +updateAccountPassword 2 +updateAccountProfile 2 +updateClient 5 +updateClientLabel 2 +updateConnection 4 +updateConversation 2 +updateConversationAccess 3 +updateConversationMessageTimer 2 +updateConversationReceiptMode 2 +updateEndpoint 3 +updateManagedBy 2 +updateMember 2 +updatePermissions 2 +updatePhone 2 +updateRichInfo 2 +updateSSOId 2 
+updateSearchableStatus 2 +updateService 3 +updateServiceConn 3 +updateServiceTags 2 +updateServiceWhitelist 2 +updateTeam 2 +updateTeamMember 2 +updateTeamStatus 3 +updateUser 4 +upload 2 +url 3 +urlPort 2 +user 4 +userClients 2 +userId 3 +userName 2 +userUpdate 2 +validate 2 +values 2 +verify 3 +verifyDeleteUser 2 +version 2 +wait 2 +whitelistService 2 +wsAssertMemberJoin 2 +wsAssertMemberLeave 2 +x1 3 +x3 6 +zAuthAccess 2 +zConn 5 +zUser 6 +zauth 3 \ No newline at end of file From 56e4e0a891da3650f5112d4ed23ca0fbcee908cd Mon Sep 17 00:00:00 2001 From: fisx Date: Wed, 13 Mar 2019 11:19:18 +0100 Subject: [PATCH 06/23] logging vs. newlines (#642) * Filter newlines in log output. * Identify all(?) places where `new` should be replaced by `mkLogger`. --- .linting/duplicate-ids-whitelist.txt | 4 +-- libs/extended/package.yaml | 5 ++-- libs/extended/src/System/Logger/Extended.hs | 25 +++++++++++++++++++ libs/ropes/test/integration-aws-auth/Main.hs | 2 +- libs/ropes/test/integration-aws/Main.hs | 2 +- .../integration-aws/Tests/Ropes/Aws/Ses.hs | 2 +- services/brig/index/src/Main.hs | 2 +- services/brig/package.yaml | 2 +- services/brig/schema/src/Main.hs | 8 +++--- services/brig/src/Brig/App.hs | 10 ++------ services/brig/test/integration/Main.hs | 2 +- services/cannon/package.yaml | 3 ++- services/cannon/src/Cannon/API.hs | 20 ++++++--------- services/cargohold/package.yaml | 3 ++- services/cargohold/src/CargoHold/App.hs | 5 ++-- services/galley/journaler/src/Main.hs | 4 +-- services/galley/package.yaml | 3 ++- services/galley/schema/src/Main.hs | 8 +++--- services/galley/src/Galley/App.hs | 10 ++------ services/galley/test/integration/API/SQS.hs | 2 +- services/gundeck/package.yaml | 3 ++- services/gundeck/schema/src/Main.hs | 8 +++--- services/gundeck/src/Gundeck/Env.hs | 17 ++++--------- services/proxy/package.yaml | 3 ++- services/proxy/src/Proxy/Env.hs | 8 +++--- services/spar/package.yaml | 3 +++ services/spar/schema/src/Main.hs | 8 +++--- 
services/spar/src/Spar/Run.hs | 15 ++--------- services/spar/test-integration/Spec.hs | 4 ++- .../spar/test-integration/Test/LoggingSpec.hs | 20 +++++++++++++++ .../spar/test-integration/Test/MetricsSpec.hs | 5 ---- services/spar/test-integration/Util/Core.hs | 4 ++- stack.yaml | 2 ++ tools/api-simulations/loadtest/src/Main.hs | 2 +- tools/api-simulations/smoketest/src/Main.hs | 2 +- tools/db/auto-whitelist/src/Main.hs | 2 +- tools/db/service-backfill/src/Main.hs | 2 +- 37 files changed, 127 insertions(+), 103 deletions(-) create mode 100644 libs/extended/src/System/Logger/Extended.hs create mode 100644 services/spar/test-integration/Test/LoggingSpec.hs diff --git a/.linting/duplicate-ids-whitelist.txt b/.linting/duplicate-ids-whitelist.txt index 821e0e9a636..4586c0a49f0 100644 --- a/.linting/duplicate-ids-whitelist.txt +++ b/.linting/duplicate-ids-whitelist.txt @@ -297,6 +297,7 @@ ec2InternalHostname 2 ec2Region 2 empty 5 enqueue 4 +ensureReAuthorised 2 env 2 euEmail 2 event 3 @@ -417,7 +418,6 @@ mkBot 2 mkEndpoint 2 mkEnv 11 mkKey 3 -mkLogger 5 mkPasswordResetKey 2 monitor 2 monitoring 4 @@ -565,7 +565,7 @@ sockSv 2 sockSvTiny 2 sockSvTinyNetstr 2 someLastPrekeys 2 -spec 12 +spec 13 ssoLogin 4 start 2 status 2 diff --git a/libs/extended/package.yaml b/libs/extended/package.yaml index bb2cfc13cf6..deee7361404 100644 --- a/libs/extended/package.yaml +++ b/libs/extended/package.yaml @@ -1,4 +1,4 @@ -defaults: +defaults: local: ../../package-defaults.yaml name: extended version: '0.1.0' @@ -18,9 +18,8 @@ dependencies: - extra - imports - optparse-applicative +- tinylog - unliftio library: source-dirs: src - exposed-modules: - - Options.Applicative.Extended stability: experimental diff --git a/libs/extended/src/System/Logger/Extended.hs b/libs/extended/src/System/Logger/Extended.hs new file mode 100644 index 00000000000..bd24ff6801e --- /dev/null +++ b/libs/extended/src/System/Logger/Extended.hs @@ -0,0 +1,25 @@ +-- | Tinylog convenience things. 
+module System.Logger.Extended
+    ( mkLogger
+    , mkLogger'
+    ) where
+
+import Imports
+
+import qualified System.Logger as Log
+
+mkLogger :: Log.Level -> Bool -> IO Log.Logger
+mkLogger lvl netstr = Log.new'
+    . Log.setOutput Log.StdOut
+    . Log.setFormat Nothing
+    $ Log.simpleSettings (Just lvl) (Just netstr)
+
+-- | Work where there are no options; Use Log.new which reads in LOG_* env variables.
+--
+-- TODO: DEPRECATED!  Use 'mkLogger' instead and get all settings from config files, not from
+-- environment!
+mkLogger' :: IO Log.Logger
+mkLogger' = Log.new
+    . Log.setOutput Log.StdOut
+    . Log.setFormat Nothing
+    $ Log.defSettings
diff --git a/libs/ropes/test/integration-aws-auth/Main.hs b/libs/ropes/test/integration-aws-auth/Main.hs
index 5cb8a7a096f..ead5dc6dce3 100644
--- a/libs/ropes/test/integration-aws-auth/Main.hs
+++ b/libs/ropes/test/integration-aws-auth/Main.hs
@@ -10,7 +10,7 @@ import qualified System.Logger as Logger
 main :: IO ()
 main = do
     hSetBuffering stdout NoBuffering
-    l <- Logger.new Logger.defSettings
+    l <- Logger.new Logger.defSettings -- TODO: use mkLogger'?
     m <- newManager defaultManagerSettings
     e <- newEnv l m Nothing
     forever $ do
diff --git a/libs/ropes/test/integration-aws/Main.hs b/libs/ropes/test/integration-aws/Main.hs
index bcb3a200f67..91d4285064e 100644
--- a/libs/ropes/test/integration-aws/Main.hs
+++ b/libs/ropes/test/integration-aws/Main.hs
@@ -12,7 +12,7 @@ import qualified Tests.Ropes.Aws.Ses as SES
 main :: IO ()
 main = do
-    l <- Logger.new Logger.defSettings
+    l <- Logger.new Logger.defSettings -- TODO: use mkLogger'?
k <- pack <$> getEnv "AWS_ACCESS_KEY" s <- pack <$> getEnv "AWS_SECRET_KEY" m <- newManager tlsManagerSettings diff --git a/libs/ropes/test/integration-aws/Tests/Ropes/Aws/Ses.hs b/libs/ropes/test/integration-aws/Tests/Ropes/Aws/Ses.hs index e94a8609908..6b7fd0d1e84 100644 --- a/libs/ropes/test/integration-aws/Tests/Ropes/Aws/Ses.hs +++ b/libs/ropes/test/integration-aws/Tests/Ropes/Aws/Ses.hs @@ -37,7 +37,7 @@ sendRawMailSuccess e = do sendMailFailure :: Env -> IO () sendMailFailure e = do - l <- Logger.new Logger.defSettings + l <- Logger.new Logger.defSettings -- TODO: use mkLogger'? x <- newEnv l (getManager e) $ Just (AccessKeyId "abc", SecretAccessKey "eh?") r <- runExceptT . trySes $ sendRequest x sesCfg =<< sendRawEmail testMimeMail case r of diff --git a/services/brig/index/src/Main.hs b/services/brig/index/src/Main.hs index 1d1e301e7db..eb2eb20f6a4 100644 --- a/services/brig/index/src/Main.hs +++ b/services/brig/index/src/Main.hs @@ -20,7 +20,7 @@ main = do <> fullDesc initLogger - = Log.new + = Log.new -- TODO: use mkLogger'? . Log.setOutput Log.StdOut . Log.setFormat Nothing . 
Log.setBufSize 0
diff --git a/services/brig/package.yaml b/services/brig/package.yaml
index 2dd033dbfd3..aa87314a6d5 100644
--- a/services/brig/package.yaml
+++ b/services/brig/package.yaml
@@ -1,4 +1,4 @@
-defaults: 
+defaults:
   local: ../../package-defaults.yaml
 name: brig
 version: '1.35.0'
diff --git a/services/brig/schema/src/Main.hs b/services/brig/schema/src/Main.hs
index a0ecff49a9f..80e589a0a7f 100644
--- a/services/brig/schema/src/Main.hs
+++ b/services/brig/schema/src/Main.hs
@@ -3,9 +3,11 @@ module Main where
 import Imports
 import Cassandra.Schema
 import Control.Exception (finally)
-import System.Logger hiding (info)
 import Util.Options
 
+import qualified System.Logger as Log
+import qualified System.Logger.Extended as Log
+
 import qualified V9
 import qualified V10
 import qualified V11
@@ -59,7 +61,7 @@ main = do
     let desc = "Brig Cassandra Schema Migrations"
         defaultPath = "/etc/wire/brig/conf/brig-schema.yaml"
     o <- getOptions desc (Just migrationOptsParser) defaultPath
-    l <- new $ setOutput StdOut . setFormat Nothing $ defSettings
+    l <- Log.mkLogger'
     migrateSchema l o
         [ V9.migration
         , V10.migration
@@ -108,4 +110,4 @@ main = do
         , V56.migration
         , V57.migration
         , V58.migration
-        ] `finally` close l
+        ] `finally` Log.close l
diff --git a/services/brig/src/Brig/App.hs b/services/brig/src/Brig/App.hs
index a08e9d3a959..daf5f531914 100644
--- a/services/brig/src/Brig/App.hs
+++ b/services/brig/src/Brig/App.hs
@@ -108,6 +108,7 @@ import qualified System.FilePath as Path
 import qualified System.FSNotify as FS
 import qualified System.Logger as Log
 import qualified System.Logger.Class as LC
+import qualified System.Logger.Extended as Log
 
 schemaVersion :: Int32
 schemaVersion = 58
@@ -149,20 +150,13 @@ data Env = Env
 
 makeLenses ''Env
 
-mkLogger :: Opts -> IO Logger
-mkLogger opts = Log.new $ Log.defSettings
-    & Log.setLogLevel (Opt.logLevel opts)
-    & Log.setOutput Log.StdOut
-    & Log.setFormat Nothing
-    & Log.setNetStrings (Opt.logNetStrings opts)
-
 newEnv :: Opts -> IO Env
 newEnv o = do
     Just md5    <- getDigestByName "MD5"
     Just sha256 <- getDigestByName "SHA256"
     Just sha512 <- getDigestByName "SHA512"
     mtr <- Metrics.metrics
-    lgr <- mkLogger o
+    lgr <- Log.mkLogger (Opt.logLevel o) (Opt.logNetStrings o)
     cas <- initCassandra o lgr
     mgr <- initHttpManager
     ext <- initExtGetManager
diff --git a/services/brig/test/integration/Main.hs b/services/brig/test/integration/Main.hs
index 04529a7e134..d72cd761e21 100644
--- a/services/brig/test/integration/Main.hs
+++ b/services/brig/test/integration/Main.hs
@@ -55,7 +55,7 @@ runTests iConf bConf otherArgs = do
     casKey <- optOrEnv (\v -> (Opts.cassandra v)^.casKeyspace) bConf pack "BRIG_CASSANDRA_KEYSPACE"
     awsOpts <- parseAWSEnv (Opts.aws <$> bConf)
-    lg <- Logger.new Logger.defSettings
+    lg <- Logger.new Logger.defSettings -- TODO: use mkLogger'?
db <- defInitCassandra casKey casHost casPort lg mg <- newManager tlsManagerSettings emailAWSOpts <- parseEmailAWSOpts diff --git a/services/cannon/package.yaml b/services/cannon/package.yaml index 4491cbb3ff3..88d67653e5c 100644 --- a/services/cannon/package.yaml +++ b/services/cannon/package.yaml @@ -1,4 +1,4 @@ -defaults: +defaults: local: ../../package-defaults.yaml name: cannon version: '0.31.0' @@ -11,6 +11,7 @@ copyright: (c) 2017 Wire Swiss GmbH license: AGPL-3 dependencies: - imports +- extended library: source-dirs: src exposed-modules: diff --git a/services/cannon/src/Cannon/API.hs b/services/cannon/src/Cannon/API.hs index 0eacb0954ee..fca9eae2523 100644 --- a/services/cannon/src/Cannon/API.hs +++ b/services/cannon/src/Cannon/API.hs @@ -28,7 +28,6 @@ import Network.Wai.Utilities.Swagger import Network.Wai.Handler.Warp hiding (run) import Network.Wai.Handler.WebSockets import System.Logger (msg, val) -import System.Logger.Class (Logger) import System.Random.MWC (createSystemRandom) import qualified Cannon.Dict as D @@ -36,21 +35,16 @@ import qualified Data.ByteString.Lazy as L import qualified Data.Metrics.Middleware as Metrics import qualified Network.Wai.Middleware.Gzip as Gzip import qualified Network.WebSockets as Ws -import qualified System.Logger.Class as Logger +import qualified System.Logger as L +import qualified System.Logger.Extended as L +import qualified System.Logger.Class as LC import qualified System.IO.Strict as Strict -mkLogger :: Opts -> IO Logger -mkLogger o = Logger.new $ Logger.defSettings - & Logger.setLogLevel (o^.logLevel) - & Logger.setOutput Logger.StdOut - & Logger.setFormat Nothing - & Logger.setNetStrings (o^.logNetStrings) - run :: Opts -> IO () run o = do ext <- loadExternal m <- metrics - g <- mkLogger o + g <- L.mkLogger (o ^. logLevel) (o ^. 
logNetStrings) e <- mkEnv <$> pure m <*> pure ext <*> pure o @@ -64,7 +58,7 @@ run o = do measured = measureRequests m (treeToPaths rtree) app r k = runCannon e (route rtree r k) r start = measured . catchErrors g m $ Gzip.gzip Gzip.def app - runSettings s start `finally` Logger.close (applog e) + runSettings s start `finally` L.close (applog e) where idleTimeout = fromIntegral $ maxPingInterval + 3 @@ -160,11 +154,11 @@ singlePush :: Cannon L.ByteString -> PushTarget -> Cannon PushStatus singlePush notification (PushTarget usrid conid) = do let k = mkKey usrid conid d <- clients - Logger.debug $ client (key2bytes k) . msg (val "push") + LC.debug $ client (key2bytes k) . msg (val "push") c <- D.lookup k d case c of Nothing -> do - Logger.debug $ client (key2bytes k) . msg (val "push: client gone") + LC.debug $ client (key2bytes k) . msg (val "push: client gone") return PushStatusGone Just x -> do e <- wsenv diff --git a/services/cargohold/package.yaml b/services/cargohold/package.yaml index 67d75305791..001b28112ab 100644 --- a/services/cargohold/package.yaml +++ b/services/cargohold/package.yaml @@ -1,4 +1,4 @@ -defaults: +defaults: local: ../../package-defaults.yaml name: cargohold version: '1.5.0' @@ -20,6 +20,7 @@ dependencies: - data-default >=0.5 - errors >=1.4 - exceptions >=0.6 +- extended - HsOpenSSL >=0.11 - http-client >=0.4 - http-types >=0.8 diff --git a/services/cargohold/src/CargoHold/App.hs b/services/cargohold/src/CargoHold/App.hs index 88802b0d782..ffe18936cdc 100644 --- a/services/cargohold/src/CargoHold/App.hs +++ b/services/cargohold/src/CargoHold/App.hs @@ -56,6 +56,7 @@ import qualified OpenSSL.Session as SSL import qualified OpenSSL.X509.SystemStore as SSL import qualified Ropes.Aws as Aws import qualified System.Logger as Log +import qualified System.Logger.Extended as Log ------------------------------------------------------------------------------- -- Environment @@ -84,9 +85,7 @@ makeLenses ''Env newEnv :: Opts -> IO Env newEnv o = do 
met <- Metrics.metrics - lgr <- Log.new $ Log.setOutput Log.StdOut - . Log.setFormat Nothing - $ Log.defSettings + lgr <- Log.mkLogger' mgr <- initHttpManager awe <- initAws o lgr mgr return $ Env awe met lgr mgr def (o^.optSettings) diff --git a/services/galley/journaler/src/Main.hs b/services/galley/journaler/src/Main.hs index 20442d8fc4b..a6cc2132f09 100644 --- a/services/galley/journaler/src/Main.hs +++ b/services/galley/journaler/src/Main.hs @@ -32,7 +32,7 @@ main = withOpenSSL $ do <> fullDesc initLogger - = Log.new + = Log.new -- TODO: use mkLogger'? . Log.setOutput Log.StdOut . Log.setFormat Nothing . Log.setBufSize 0 @@ -55,7 +55,7 @@ main = withOpenSSL $ do mkAWSEnv :: JournalOpts -> IO Aws.Env mkAWSEnv o = do - l <- Log.new $ Log.setOutput Log.StdOut . Log.setFormat Nothing $ Log.defSettings + l <- Log.new $ Log.setOutput Log.StdOut . Log.setFormat Nothing $ Log.defSettings -- TODO: use mkLogger'? mgr <- initHttpManager Aws.mkEnv l mgr o diff --git a/services/galley/package.yaml b/services/galley/package.yaml index 1c9a53b08bd..e68f139ceee 100644 --- a/services/galley/package.yaml +++ b/services/galley/package.yaml @@ -1,4 +1,4 @@ -defaults: +defaults: local: ../../package-defaults.yaml name: galley version: '0.83.0' @@ -10,6 +10,7 @@ copyright: (c) 2017 Wire Swiss GmbH license: AGPL-3 dependencies: - imports +- extended - safe >=0.3 - ssl-util library: diff --git a/services/galley/schema/src/Main.hs b/services/galley/schema/src/Main.hs index 89f9c3017db..ce77b172991 100644 --- a/services/galley/schema/src/Main.hs +++ b/services/galley/schema/src/Main.hs @@ -4,7 +4,9 @@ import Imports import Cassandra.Schema import Control.Exception (finally) import Options.Applicative -import System.Logger hiding (info) + +import qualified System.Logger as Log +import qualified System.Logger.Extended as Log import qualified V20 import qualified V21 @@ -21,7 +23,7 @@ import qualified V30 main :: IO () main = do o <- execParser (info (helper <*> migrationOptsParser) desc) 
- l <- new $ setOutput StdOut . setFormat Nothing $ defSettings + l <- Log.mkLogger' migrateSchema l o [ V20.migration , V21.migration @@ -38,6 +40,6 @@ main = do -- 'schemaVersion' in Galley.Data ] `finally` - close l + Log.close l where desc = header "Galley Cassandra Schema" <> fullDesc diff --git a/services/galley/src/Galley/App.hs b/services/galley/src/Galley/App.hs index 985989d9faf..be2454ecca1 100644 --- a/services/galley/src/Galley/App.hs +++ b/services/galley/src/Galley/App.hs @@ -67,6 +67,7 @@ import qualified Galley.Aws as Aws import qualified Galley.Queue as Q import qualified OpenSSL.X509.SystemStore as Ssl import qualified System.Logger as Logger +import qualified System.Logger.Extended as Logger data DeleteItem = TeamItem TeamId UserId (Maybe ConnId) deriving (Eq, Ord, Show) @@ -123,16 +124,9 @@ instance MonadHttp Galley where instance HasRequestId Galley where getRequestId = view reqId -mkLogger :: Opts -> IO Logger -mkLogger opts = Logger.new $ Logger.defSettings - & Logger.setLogLevel (opts ^. optLogLevel) - & Logger.setOutput Logger.StdOut - & Logger.setFormat Nothing - & Logger.setNetStrings (opts ^. optLogNetStrings) - createEnv :: Metrics -> Opts -> IO Env createEnv m o = do - l <- mkLogger o + l <- Logger.mkLogger (o ^. optLogLevel) (o ^. optLogNetStrings) mgr <- initHttpManager o Env def m o l mgr <$> initCassandra o l <*> Q.new 16000 diff --git a/services/galley/test/integration/API/SQS.hs b/services/galley/test/integration/API/SQS.hs index 1ece9ba02f5..029bafff47c 100644 --- a/services/galley/test/integration/API/SQS.hs +++ b/services/galley/test/integration/API/SQS.hs @@ -200,6 +200,6 @@ initHttpManager = do mkAWSEnv :: JournalOpts -> IO Aws.Env mkAWSEnv opts = do - l <- L.new $ L.setOutput L.StdOut . L.setFormat Nothing $ L.defSettings + l <- L.new $ L.setOutput L.StdOut . L.setFormat Nothing $ L.defSettings -- TODO: use mkLogger'? 
mgr <- initHttpManager Aws.mkEnv l mgr opts diff --git a/services/gundeck/package.yaml b/services/gundeck/package.yaml index 3f942e0e6c4..ea889163da2 100644 --- a/services/gundeck/package.yaml +++ b/services/gundeck/package.yaml @@ -1,4 +1,4 @@ -defaults: +defaults: local: ../../package-defaults.yaml name: gundeck version: '1.45.0' @@ -10,6 +10,7 @@ copyright: (c) 2017 Wire Swiss GmbH license: AGPL-3 dependencies: - imports +- extended library: source-dirs: src ghc-options: diff --git a/services/gundeck/schema/src/Main.hs b/services/gundeck/schema/src/Main.hs index ae1dd4ea973..63828afeb3a 100644 --- a/services/gundeck/schema/src/Main.hs +++ b/services/gundeck/schema/src/Main.hs @@ -3,9 +3,11 @@ module Main where import Imports import Cassandra.Schema import Control.Exception (finally) -import System.Logger hiding (info) import Util.Options +import qualified System.Logger as Log +import qualified System.Logger.Extended as Log + import qualified V1 import qualified V2 import qualified V3 @@ -17,7 +19,7 @@ import qualified V7 main :: IO () main = do o <- getOptions desc (Just migrationOptsParser) defaultPath - l <- new $ setOutput StdOut . 
setFormat Nothing $ defSettings + l <- Log.mkLogger' migrateSchema l o [ V1.migration , V2.migration @@ -26,7 +28,7 @@ main = do , V5.migration , V6.migration , V7.migration - ] `finally` close l + ] `finally` Log.close l where desc = "Gundeck Cassandra Schema Migrations" defaultPath = "/etc/wire/gundeck/conf/gundeck-schema.yaml" diff --git a/services/gundeck/src/Gundeck/Env.hs b/services/gundeck/src/Gundeck/Env.hs index bfbf946c1ae..34cb9c314d4 100644 --- a/services/gundeck/src/Gundeck/Env.hs +++ b/services/gundeck/src/Gundeck/Env.hs @@ -14,7 +14,6 @@ import Util.Options import Gundeck.Options as Opt import Network.HTTP.Client (responseTimeoutMicro) import Network.HTTP.Client.TLS (tlsManagerSettings) -import System.Logger.Class hiding (Error, info) import qualified Cassandra as C import qualified Cassandra.Settings as C @@ -22,12 +21,13 @@ import qualified Database.Redis.IO as Redis import qualified Data.List.NonEmpty as NE import qualified Gundeck.Aws as Aws import qualified System.Logger as Logger +import qualified System.Logger.Extended as Logger data Env = Env { _reqId :: !RequestId , _monitor :: !Metrics , _options :: !Opts - , _applog :: !Logger + , _applog :: !Logger.Logger , _manager :: !Manager , _cstate :: !ClientState , _rstate :: !Redis.Pool @@ -40,16 +40,9 @@ makeLenses ''Env schemaVersion :: Int32 schemaVersion = 7 -mkLogger :: Opts -> IO Logger -mkLogger opts = Logger.new $ Logger.defSettings - & Logger.setLogLevel (opts ^. optLogLevel) - & Logger.setOutput Logger.StdOut - & Logger.setFormat Nothing - & Logger.setNetStrings (opts ^. optLogNetStrings) - createEnv :: Metrics -> Opts -> IO Env createEnv m o = do - l <- mkLogger o + l <- Logger.mkLogger (o ^. optLogLevel) (o ^. optLogNetStrings) c <- maybe (C.initialContactsPlain (o^.optCassandra.casEndpoint.epHost)) (C.initialContactsDisco "cassandra_gundeck") (unpack <$> o^.optDiscoUrl) @@ -83,6 +76,6 @@ createEnv m o = do } return $! 
Env def m o l n p r a io -reqIdMsg :: RequestId -> Msg -> Msg -reqIdMsg = ("request" .=) . unRequestId +reqIdMsg :: RequestId -> Logger.Msg -> Logger.Msg +reqIdMsg = ("request" Logger..=) . unRequestId {-# INLINE reqIdMsg #-} diff --git a/services/proxy/package.yaml b/services/proxy/package.yaml index fe0d646dae4..c5459360b30 100644 --- a/services/proxy/package.yaml +++ b/services/proxy/package.yaml @@ -1,4 +1,4 @@ -defaults: +defaults: local: ../../package-defaults.yaml name: proxy version: '0.9.0' @@ -10,6 +10,7 @@ copyright: (c) 2017 Wire Swiss GmbH license: AGPL-3 dependencies: - imports +- extended library: source-dirs: src ghc-options: diff --git a/services/proxy/src/Proxy/Env.hs b/services/proxy/src/Proxy/Env.hs index b982cfb4d1a..d086fbc9421 100644 --- a/services/proxy/src/Proxy/Env.hs +++ b/services/proxy/src/Proxy/Env.hs @@ -20,15 +20,15 @@ import Data.Metrics.Middleware (Metrics) import Proxy.Options import Network.HTTP.Client import Network.HTTP.Client.TLS (tlsManagerSettings) -import System.Logger.Class hiding (Error, info) import qualified System.Logger as Logger +import qualified System.Logger.Extended as Logger data Env = Env { _reqId :: !RequestId , _monitor :: !Metrics , _options :: !Opts - , _applog :: !Logger + , _applog :: !Logger.Logger , _manager :: !Manager , _secrets :: !Config , _loader :: !ThreadId @@ -38,7 +38,7 @@ makeLenses ''Env createEnv :: Metrics -> Opts -> IO Env createEnv m o = do - g <- new (setOutput StdOut . setFormat Nothing $ defSettings) + g <- Logger.mkLogger' n <- newManager tlsManagerSettings { managerConnCount = o^.httpPoolSize , managerIdleConnectionCount = 3 * (o^.httpPoolSize) @@ -49,7 +49,7 @@ createEnv m o = do return $! 
Env def m o g n c t where reloadError g x = - Logger.err g (msg $ val "Failed reloading config: " +++ show x) + Logger.err g (Logger.msg $ Logger.val "Failed reloading config: " Logger.+++ show x) destroyEnv :: Env -> IO () destroyEnv e = do diff --git a/services/spar/package.yaml b/services/spar/package.yaml index c3936bfe898..4210ba34542 100644 --- a/services/spar/package.yaml +++ b/services/spar/package.yaml @@ -36,6 +36,7 @@ dependencies: - email-validate - errors - exceptions # (for MonadClient, which in turn needs MonadCatch) + - extended - extra - galley-types - ghc-prim @@ -126,8 +127,10 @@ executables: - MonadRandom - random - servant-client + - silently - spar - stm + - tinylog - wai - warp-tls - xml-conduit diff --git a/services/spar/schema/src/Main.hs b/services/spar/schema/src/Main.hs index b7e22e7a17f..d8aa70ab13e 100644 --- a/services/spar/schema/src/Main.hs +++ b/services/spar/schema/src/Main.hs @@ -3,9 +3,11 @@ module Main where import Imports import Cassandra.Schema import Control.Exception (finally) -import System.Logger hiding (info) import Util.Options +import qualified System.Logger as Log +import qualified System.Logger.Extended as Log + import qualified V0 import qualified V1 import qualified V2 @@ -18,7 +20,7 @@ main = do let desc = "Spar Cassandra Schema Migrations" defaultPath = "/etc/wire/spar/conf/spar-schema.yaml" o <- getOptions desc (Just migrationOptsParser) defaultPath - l <- new $ setOutput StdOut . setFormat Nothing $ defSettings + l <- Log.mkLogger' migrateSchema l o [ V0.migration , V1.migration @@ -34,4 +36,4 @@ main = do -- effectively break the currently deployed spar service) -- see https://github.com/wireapp/wire-server/pull/476. 
- ] `finally` close l + ] `finally` Log.close l diff --git a/services/spar/src/Spar/Run.hs b/services/spar/src/Spar/Run.hs index 090e165fe13..f9c23f07217 100644 --- a/services/spar/src/Spar/Run.hs +++ b/services/spar/src/Spar/Run.hs @@ -7,7 +7,6 @@ -- @exec/Main.hs@, but it's just a wrapper over 'runServer'.) module Spar.Run ( initCassandra - , mkLogger , runServer ) where @@ -41,6 +40,7 @@ import qualified Network.Wai.Utilities.Server as WU import qualified SAML2.WebSSO as SAML import qualified Spar.Data as Data import qualified System.Logger as Log +import qualified System.Logger.Extended as Log ---------------------------------------------------------------------- @@ -66,17 +66,6 @@ initCassandra opts lgr = do pure cas ----------------------------------------------------------------------- --- logger - -mkLogger :: Opts -> IO Logger -mkLogger opts = Log.new $ Log.defSettings - & Log.setLogLevel (toLevel $ saml opts ^. SAML.cfgLogLevel) - & Log.setOutput Log.StdOut - & Log.setFormat Nothing - & Log.setNetStrings (logNetStrings opts) - - ---------------------------------------------------------------------- -- servant / wai / warp @@ -84,7 +73,7 @@ mkLogger opts = Log.new $ Log.defSettings -- this would create the "Listening on..." log message there, but it may also have other benefits. runServer :: Opts -> IO () runServer sparCtxOpts = do - sparCtxLogger <- mkLogger sparCtxOpts + sparCtxLogger <- Log.mkLogger (toLevel $ saml sparCtxOpts ^. SAML.cfgLogLevel) (logNetStrings sparCtxOpts) mx <- metrics sparCtxCas <- initCassandra sparCtxOpts sparCtxLogger let settings = Warp.defaultSettings & Warp.setHost (fromString shost) . 
Warp.setPort sport diff --git a/services/spar/test-integration/Spec.hs b/services/spar/test-integration/Spec.hs index 256d9ad8545..8e7ec3c43c0 100644 --- a/services/spar/test-integration/Spec.hs +++ b/services/spar/test-integration/Spec.hs @@ -15,13 +15,14 @@ import System.Environment (withArgs) import Test.Hspec import Util +import qualified Test.LoggingSpec import qualified Test.MetricsSpec import qualified Test.Spar.APISpec import qualified Test.Spar.AppSpec import qualified Test.Spar.DataSpec import qualified Test.Spar.Intra.BrigSpec -import qualified Test.Spar.Scim.UserSpec import qualified Test.Spar.Scim.AuthSpec +import qualified Test.Spar.Scim.UserSpec main :: IO () @@ -40,6 +41,7 @@ partitionArgs = go [] [] mkspec :: SpecWith TestEnv mkspec = do + describe "Logging" Test.LoggingSpec.spec describe "Metrics" Test.MetricsSpec.spec describe "Spar.API" Test.Spar.APISpec.spec describe "Spar.App" Test.Spar.AppSpec.spec diff --git a/services/spar/test-integration/Test/LoggingSpec.hs b/services/spar/test-integration/Test/LoggingSpec.hs new file mode 100644 index 00000000000..496e3e45ca8 --- /dev/null +++ b/services/spar/test-integration/Test/LoggingSpec.hs @@ -0,0 +1,20 @@ +module Test.LoggingSpec (spec) where + +import Imports +import Control.Lens +import Spar.App +import System.Logger as Log +import System.IO.Silently (capture) +import Util + + +spec :: HasCallStack => SpecWith TestEnv +spec = describe "logging" $ do + it "does not log newlines (see haddocks of simpleSettings)" $ do + logger <- asks (^. teSparEnv . 
to sparCtxLogger) + liftIO $ do + (out, _) <- capture $ do + Log.fatal logger $ Log.msg ("hrgh\n\nwoaa" :: Text) + Log.flush logger + out `shouldContain` "hrgh woaa" + out `shouldNotContain` "hrgh\n\nwoaa" diff --git a/services/spar/test-integration/Test/MetricsSpec.hs b/services/spar/test-integration/Test/MetricsSpec.hs index a56ecda46d3..e6dc3ebe9b1 100644 --- a/services/spar/test-integration/Test/MetricsSpec.hs +++ b/services/spar/test-integration/Test/MetricsSpec.hs @@ -1,9 +1,4 @@ -{-# LANGUAGE LambdaCase #-} {-# LANGUAGE NoMonomorphismRestriction #-} -{-# LANGUAGE OverloadedStrings #-} -{-# LANGUAGE ScopedTypeVariables #-} -{-# LANGUAGE TupleSections #-} -{-# LANGUAGE ViewPatterns #-} -- | See also: services/brig/test/integration/API/Metrics.hs module Test.MetricsSpec (spec) where diff --git a/services/spar/test-integration/Util/Core.hs b/services/spar/test-integration/Util/Core.hs index 02f7dd0370e..db352adb406 100644 --- a/services/spar/test-integration/Util/Core.hs +++ b/services/spar/test-integration/Util/Core.hs @@ -94,6 +94,7 @@ import Network.HTTP.Client.MultipartFormData import SAML2.WebSSO as SAML import SAML2.WebSSO.Test.Credentials import SAML2.WebSSO.Test.MockResponse +import Spar.App (toLevel) import Spar.API.Types import Spar.Run import Spar.Types @@ -125,6 +126,7 @@ import qualified Text.XML as XML import qualified Text.XML.Cursor as XML import qualified Text.XML.DSig as SAML import qualified Web.Cookie as Web +import qualified System.Logger.Extended as Log -- | Call 'mkEnv' with options from config files. @@ -169,7 +171,7 @@ cliOptsParser = (,) <$> mkEnv :: HasCallStack => IntegrationConfig -> Opts -> IO TestEnv mkEnv _teTstOpts _teOpts = do _teMgr :: Manager <- newManager defaultManagerSettings - sparCtxLogger <- mkLogger _teOpts + sparCtxLogger <- Log.mkLogger (toLevel $ saml _teOpts ^. 
SAML.cfgLogLevel) (logNetStrings _teOpts) _teCql :: ClientState <- initCassandra _teOpts sparCtxLogger let _teBrig = endpointToReq (cfgBrig _teTstOpts) _teGalley = endpointToReq (cfgGalley _teTstOpts) diff --git a/stack.yaml b/stack.yaml index 819a6b79a8c..2ecb35f0de2 100644 --- a/stack.yaml +++ b/stack.yaml @@ -41,6 +41,8 @@ extra-deps: commit: c03d17d656ac467350c983d5f844c199e5daceea # master (Feb 21, 2019) - git: https://github.com/wireapp/hscim commit: 42f6018812bf0f04741231b67b1f5e790ce0d489 # master (Feb 25, 2019) +- git: https://gitlab.com/fisx/tinylog + commit: 8db744579ae38ea28139af4dc87635d709761779 # https://gitlab.com/twittner/tinylog/merge_requests/6 flags: types-common: diff --git a/tools/api-simulations/loadtest/src/Main.hs b/tools/api-simulations/loadtest/src/Main.hs index 2c37896ab4e..63febe005b6 100644 --- a/tools/api-simulations/loadtest/src/Main.hs +++ b/tools/api-simulations/loadtest/src/Main.hs @@ -24,7 +24,7 @@ main = do unless (clientsMin o >= 1) $ error "invalid value for --clients: has to be at least 1" m <- newManager tlsManagerSettings - l <- Log.new Log.defSettings + l <- Log.new Log.defSettings -- TODO: use mkLogger'? e <- newBotNetEnv m l (ltsBotNetSettings o) void . runBotNet e $ do runLoadTest o diff --git a/tools/api-simulations/smoketest/src/Main.hs b/tools/api-simulations/smoketest/src/Main.hs index c61a29a7644..7469e1f8417 100644 --- a/tools/api-simulations/smoketest/src/Main.hs +++ b/tools/api-simulations/smoketest/src/Main.hs @@ -18,7 +18,7 @@ main :: IO () main = do o <- parseOptions m <- newManager tlsManagerSettings - l <- Log.new Log.defSettings + l <- Log.new Log.defSettings -- TODO: use mkLogger'? 
e <- newBotNetEnv m l o r <- runBotNet e $ do mainBotNet 5 diff --git a/tools/db/auto-whitelist/src/Main.hs b/tools/db/auto-whitelist/src/Main.hs index ef5b76ca62b..818024726a8 100644 --- a/tools/db/auto-whitelist/src/Main.hs +++ b/tools/db/auto-whitelist/src/Main.hs @@ -25,7 +25,7 @@ main = do <> fullDesc initLogger - = Log.new + = Log.new -- TODO: use mkLogger'? . Log.setOutput Log.StdOut . Log.setFormat Nothing . Log.setBufSize 0 diff --git a/tools/db/service-backfill/src/Main.hs b/tools/db/service-backfill/src/Main.hs index c1be19bdaf3..fe899b99e8e 100644 --- a/tools/db/service-backfill/src/Main.hs +++ b/tools/db/service-backfill/src/Main.hs @@ -26,7 +26,7 @@ main = do <> fullDesc initLogger - = Log.new + = Log.new -- TODO: use mkLogger'? . Log.setOutput Log.StdOut . Log.setFormat Nothing . Log.setBufSize 0 From 1340dfa66eb1c2bb4d52ed1050b49127685a20d9 Mon Sep 17 00:00:00 2001 From: fisx Date: Wed, 13 Mar 2019 15:38:04 +0100 Subject: [PATCH 07/23] Docs: using scim with curl. (#659) --- docs/reference/provisioning/scim-via-curl.md | 176 +++++++++++++++++++ 1 file changed, 176 insertions(+) create mode 100644 docs/reference/provisioning/scim-via-curl.md diff --git a/docs/reference/provisioning/scim-via-curl.md b/docs/reference/provisioning/scim-via-curl.md new file mode 100644 index 00000000000..ed0de6280e1 --- /dev/null +++ b/docs/reference/provisioning/scim-via-curl.md @@ -0,0 +1,176 @@ +# Using the SCIM API with curl {#RefScimViaCurl} + +_Author: Matthias Fischmann_ + +--- + +This page shows you how to communicate with the wire backend through +the [SCIM API](http://www.simplecloud.info/) by example. All examples +are [curl](https://curl.haxx.se/) (in bash syntax). + +If you want to dive into the backend code, start [reading here in our +backend](https://github.com/wireapp/wire-server/blob/develop/services/spar/src/Spar/Scim.hs) +and [our hscim library](https://github.com/wireapp/hscim). 
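Several examples in the SCIM walkthrough below pipe curl's JSON responses through jq. If installing jq is not an option, a crude sed one-liner can pull a single string field out of a flat JSON object. This is an illustrative helper, not part of the Wire API, and it breaks on escaped quotes and nested objects:

```bash
# extract_field NAME: naive extraction of a top-level JSON string field.
extract_field() {
  sed -n 's/.*"'"$1"'"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p'
}

# e.g. pulling the access token out of a login response:
echo '{"access_token":"abc123","token_type":"Bearer"}' | extract_field access_token
```

The jq-based commands below remain the more robust option whenever jq is available.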
+
+
+## Creating a SCIM token
+
+First, we need a little shell environment:
+
+```bash
+export WIRE_BACKEND=https://prod-nginz-https.wire.com
+export WIRE_ADMIN=...
+export WIRE_PASSWD=...
+```
+
+SCIM currently supports a variant of HTTP basic auth.  In order to
+create a token in your team, you need to authenticate using your team
+admin credentials.  Your browser or cell phone does this behind the
+scenes; with curl, you do it in plain sight: first, get a Wire access
+token.
+
+```bash
+export BEARER=$(curl -X POST \
+  --header 'Content-Type: application/json' \
+  --header 'Accept: application/json' \
+  -d '{"email":"'"$WIRE_ADMIN"'","password":"'"$WIRE_PASSWD"'"}' \
+  $WIRE_BACKEND/login'?persist=false' | jq -r .access_token)
+```
+
+This token will be good for 15 minutes; after that, just repeat.
+(Note that SCIM requests are authenticated with a SCIM token, see
+below.  SCIM tokens do not expire, but need to be deleted explicitly.)
+
+If you don't want to install [jq](https://stedolan.github.io/jq/), you
+can just call the `curl` command and copy the access token into the
+shell variable manually.
+
+A quick test that you're logged in:
+
+```bash
+curl -X GET --header "Authorization: Bearer $BEARER" \
+  $WIRE_BACKEND/self
+```
+
+Now you are ready to create a SCIM token:
+
+```bash
+export SCIM_TOKEN_FULL=$(curl -X POST \
+  --header "Authorization: Bearer $BEARER" \
+  --header 'Content-Type: application/json;charset=utf-8' \
+  -d '{ "description": "test '"`date`"'", "password": "'"$WIRE_PASSWD"'" }' \
+  $WIRE_BACKEND/scim/auth-tokens)
+export SCIM_TOKEN=$(echo $SCIM_TOKEN_FULL | jq -r .token)
+export SCIM_TOKEN_ID=$(echo $SCIM_TOKEN_FULL | jq -r .info.id)
+```
+
+... and look it up again:
+
+```bash
+curl -X GET --header "Authorization: Bearer $BEARER" \
+  $WIRE_BACKEND/scim/auth-tokens
+```
+
+... and delete it:
+
+```bash
+curl -X DELETE --header "Authorization: Bearer $BEARER" \
+  $WIRE_BACKEND/scim/auth-tokens?id=$SCIM_TOKEN_ID
+```
+
+## CRUD
+
+### JSON encoding of SCIM Users
+
+A minimal definition of a user looks like this:
+
+```bash
+export SCIM_USER='{
+  "schemas" : ["urn:ietf:params:scim:schemas:core:2.0:User"],
+  "externalId" : "f8c4ffde-4592-11e9-8600-afe11dc7d07b",
+  "userName" : "nick",
+  "displayName" : "The Nick"
+}'
+```
+
+We also support the custom fields used in rich profiles, in this form
+([see {#RefRichInfo}](../user/rich-info.md)):
+
+```bash
+export SCIM_USER='{
+  "schemas" : ["urn:ietf:params:scim:schemas:core:2.0:User", "urn:wire:scim:schemas:profile:1.0"],
+  "externalId" : "f8c4ffde-4592-11e9-8600-afe11dc7d07b",
+  "userName" : "rnick",
+  "displayName" : "The Rich Nick",
+  "urn:wire:scim:schemas:profile:1.0": {
+    "richInfo": [
+      {
+        "type": "Department",
+        "value": "Sales & Marketing"
+      },
+      {
+        "type": "Favorite color",
+        "value": "Blue"
+      }
+    ]
+  }
+}'
+```
+
+### create user
+
+```bash
+export STORED_USER=$(curl -X POST \
+  --header "Authorization: Bearer $SCIM_TOKEN" \
+  --header 'Content-Type: application/json;charset=utf-8' \
+  -d "$SCIM_USER" \
+  $WIRE_BACKEND/scim/v2/Users)
+export STORED_USER_ID=$(echo $STORED_USER | jq -r .id)
+```
+
+### get specific user
+
+```bash
+curl -X GET \
+  --header "Authorization: Bearer $SCIM_TOKEN" \
+  --header 'Content-Type: application/json;charset=utf-8' \
+  $WIRE_BACKEND/scim/v2/Users/$STORED_USER_ID
+```
+
+### get all users
+
+```bash
+curl -X GET \
+  --header "Authorization: Bearer $SCIM_TOKEN" \
+  --header 'Content-Type: application/json;charset=utf-8' \
+  $WIRE_BACKEND/scim/v2/Users/
+```
+
+### update user
+
+For each PUT request, you need to provide the full JSON object.  All
+omitted fields will be set to `null`.  (If you do not have an
+up-to-date user present, just `GET` one right before the `PUT`.)
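The GET-before-PUT advice can be scripted with jq: fetch the current resource, rewrite only the fields you care about, and send the complete object back. A sketch, with the user object inlined for illustration; in a live session it would come from the GET endpoint shown earlier, and the result would be sent back with the PUT call:

```bash
# Start from the full, previously fetched object so no field is lost
# (remember: omitted fields are set to null on PUT).
CURRENT_USER='{"schemas":["urn:ietf:params:scim:schemas:core:2.0:User"],"externalId":"updated-user-id","userName":"nick","displayName":"The Nick"}'
UPDATED_USER=$(echo "$CURRENT_USER" | jq '.userName = "newnick" | .displayName = "The New Nick"')
echo "$UPDATED_USER" | jq -r .userName

# in a live session, the edited object then goes back in full:
#   curl -X PUT ... -d "$UPDATED_USER" $WIRE_BACKEND/scim/v2/Users/$STORED_USER_ID
```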
+ +```bash +export SCIM_USER='{ + "schemas" : ["urn:ietf:params:scim:schemas:core:2.0:User"], + "externalId" : "updated-user-id", + "userName" : "newnick", + "displayName" : "The New Nick" +}' + +curl -X PUT \ + --header "Authorization: Bearer $SCIM_TOKEN" \ + --header 'Content-Type: application/json;charset=utf-8' \ + -d "$SCIM_USER" \ + $WIRE_BACKEND/scim/v2/Users/$STORED_USER_ID +``` + +### delete user + +**Not implemented yet.** + +### groups + +**Not implemented yet.** From e8c98b7fcb5d3b793331c47f4ecc9b03e74d447d Mon Sep 17 00:00:00 2001 From: Chris Penner Date: Thu, 14 Mar 2019 10:51:26 +0100 Subject: [PATCH 08/23] Disallow duplicate external ids via SCIM update user (#657) * Assert that externalId is unused when updating users --- services/spar/src/Spar/Scim/User.hs | 27 ++++++++++++++++--- .../Test/Spar/Scim/UserSpec.hs | 26 ++++++++++++++++++ 2 files changed, 49 insertions(+), 4 deletions(-) diff --git a/services/spar/src/Spar/Scim/User.hs b/services/spar/src/Spar/Scim/User.hs index 3f9cf11600f..b54624c09fe 100644 --- a/services/spar/src/Spar/Scim/User.hs +++ b/services/spar/src/Spar/Scim/User.hs @@ -25,7 +25,6 @@ import Imports import Brig.Types.User as Brig import Control.Lens hiding ((.=), Strict) import Control.Monad.Except -import Control.Monad.Extra (whenM) import Crypto.Hash import Data.Aeson as Aeson import Data.Id @@ -244,15 +243,15 @@ createValidScimUser :: forall m. (m ~ Scim.ScimHandler Spar) => ValidScimUser -> m (Scim.StoredUser ScimUserExtra) createValidScimUser (ValidScimUser user uref handl mbName richInfo) = do + -- FUTUREWORK: The @hscim@ library checks that the handle is not taken before 'create' is -- even called. However, it does that in an inefficient manner. We should remove the check -- from @hscim@ and do it here instead. - -- Check that the UserRef is not taken. 
- whenM (isJust <$> lift (wrapMonadClient (Data.getUser uref))) $ - throwError Scim.conflict {Scim.detail = Just "externalId is already taken"} -- Generate a UserId will be used both for scim user in spar and for brig. buid <- Id <$> liftIO UUID.nextRandom + assertUserRefUnused buid uref + -- Create SCIM user here in spar. storedUser <- lift $ toScimStoredUser buid user lift . wrapMonadClient $ Data.insertScimUser buid storedUser @@ -296,6 +295,9 @@ updateValidScimUser tokinfo uidText newScimUser = do <- let err = throwError $ Scim.notFound "user" uidText in maybe err pure =<< Scim.get tokinfo uidText + let userRef = newScimUser ^. vsuSAMLUserRef + assertUserRefUnused uid userRef + if Scim.value (Scim.thing oldScimStoredUser) == (newScimUser ^. vsuUser) then pure oldScimStoredUser else do @@ -406,6 +408,23 @@ calculateVersion uidText usr = Scim.Weak (Text.pack (show h)) h :: Digest SHA256 h = hashlazy (Aeson.encode (Scim.WithId uidText usr)) +{-| +Check that the UserRef is not taken; or that it's taken by the given user id. + +ASSUMPTION: every scim user has a 'SAML.UserRef', and the `SAML.NameID` in it corresponds +to a single `externalId`. +-} +assertUserRefUnused :: UserId -> SAML.UserRef -> Scim.ScimHandler Spar () +assertUserRefUnused wireUserId userRef = do + mExistingUserId <- lift $ wrapMonadClient (Data.getUser userRef) + case mExistingUserId of + -- No existing user for this userRef; it's okay to set it + Nothing -> return () + -- A user exists; verify that it's the same user before updating + Just existingUserId -> + unless (existingUserId == wireUserId) $ + throwError Scim.conflict {Scim.detail = Just "externalId is already taken"} + {- TODO: might be useful later. 
~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/services/spar/test-integration/Test/Spar/Scim/UserSpec.hs b/services/spar/test-integration/Test/Spar/Scim/UserSpec.hs index b11361faa86..39d61da294a 100644 --- a/services/spar/test-integration/Test/Spar/Scim/UserSpec.hs +++ b/services/spar/test-integration/Test/Spar/Scim/UserSpec.hs @@ -354,6 +354,8 @@ specUpdateUser = describe "PUT /Users/:id" $ do it "works fine when neither name nor handle are changed" $ testUpdateSameHandle it "updates the 'SAML.UserRef' index in Spar" $ testUpdateUserRefIndex it "updates the matching Brig user" $ testBrigSideIsUpdated + it "cannot update user to match another user's externalId" + testUpdateToExistingExternalIdFails context "user is from different team" $ do it "fails to update user with 404" testUserUpdateFailsWithNotFoundIfOutsideTeam context "scim_user has no entry with this id" $ do @@ -416,6 +418,30 @@ testScimSideIsUpdated = do Scim.created meta `shouldBe` Scim.created meta' Scim.location meta `shouldBe` Scim.location meta' +-- | Test that updating a user with the externalId of another user fails +testUpdateToExistingExternalIdFails :: TestSpar () +testUpdateToExistingExternalIdFails = do + -- Create a user via SCIM + (tok, _) <- registerIdPAndScimToken + user <- randomScimUser + _ <- createUser tok user + + newUser <- randomScimUser + storedNewUser <- createUser tok newUser + + let userExternalId = Scim.User.externalId user + -- Ensure we're actually generating an external ID; we may stop doing this in the future + liftIO $ userExternalId `shouldSatisfy` isJust + + -- Try to update the new user's external ID to be the same as 'user's. + let updatedNewUser = newUser{Scim.User.externalId = userExternalId} + + env <- ask + -- Should fail with 409 to denote that the given externalId is in use by a + -- different user. + updateUser_ (Just tok) (Just $ scimUserId storedNewUser) updatedNewUser (env ^. teSpar) + !!! 
const 409 === statusCode
+
 -- | Test that updating still works when name and handle are not changed.
 --
 -- This test is needed because if @PUT \/Users@ is implemented in such a way that it /always/

From 2df99d8d599385862e513f70bda70c49eb7f50cf Mon Sep 17 00:00:00 2001
From: Matthias Fischmann
Date: Thu, 14 Mar 2019 15:12:22 +0100
Subject: [PATCH 09/23] Fix: tinylog dependency commit hash.

---
 stack.yaml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/stack.yaml b/stack.yaml
index 2ecb35f0de2..de1a29e1a6a 100644
--- a/stack.yaml
+++ b/stack.yaml
@@ -42,7 +42,7 @@ extra-deps:
 - git: https://github.com/wireapp/hscim
   commit: 42f6018812bf0f04741231b67b1f5e790ce0d489 # master (Feb 25, 2019)
 - git: https://gitlab.com/fisx/tinylog
-  commit: 8db744579ae38ea28139af4dc87635d709761779 # https://gitlab.com/twittner/tinylog/merge_requests/6
+  commit: fd7155aaf6f090f48004a8f7857ce9d3cb4f9417 # https://gitlab.com/twittner/tinylog/merge_requests/6
 
 flags:
   types-common:

From a53949417eed7de82f28968f456316d587b52894 Mon Sep 17 00:00:00 2001
From: Artyom Kazak
Date: Fri, 15 Mar 2019 15:47:38 +0200
Subject: [PATCH 10/23] Make an index for the docs/ (#662)

---
 docs/README.md                 | 29 +++++++++++++++++++++++++++++
 docs/developer/dependencies.md |  2 +-
 2 files changed, 30 insertions(+), 1 deletion(-)
 create mode 100644 docs/README.md

diff --git a/docs/README.md b/docs/README.md
new file mode 100644
index 00000000000..7d9999a29a1
--- /dev/null
+++ b/docs/README.md
@@ -0,0 +1,29 @@
+(incomplete)
+
+# Reference documentation
+
+What you need to know as a user of the Wire backend: concepts, features, and API.
+
+## Users
+
+We support the following functionality related to users and user profiles:
+
+* [Rich info](reference/user/rich-info.md) `{#RefRichInfo}`
+* TODO
+
+## Provisioning
+
+We have support for provisioning users via SCIM ([RFC 7644][], [RFC 7643][]). It's in the beta stage.
+
+[RFC 7644]: https://tools.ietf.org/html/rfc7644
+[RFC 7643]: https://tools.ietf.org/html/rfc7643
+
+* [Using the SCIM API with curl](reference/provisioning/scim-via-curl.md) `{#RefScimViaCurl}`
+* TODO
+
+# Developer documentation
+
+What you need to know as a Wire backend developer. All of these documents can and should be referenced in the code.
+
+* [Development setup](developer/dependencies.md) `{#DevDeps}`
+* TODO
diff --git a/docs/developer/dependencies.md b/docs/developer/dependencies.md
index db7c06072ee..fff116e191a 100644
--- a/docs/developer/dependencies.md
+++ b/docs/developer/dependencies.md
@@ -1,4 +1,4 @@
-# Dependencies
+# Dependencies {#DevDeps}
 
 This page documents how to install necessary dependencies to work with
 the wire-server code base.

From 7de35751bd63ef56d46f099e1390515101247107 Mon Sep 17 00:00:00 2001
From: Artyom Kazak
Date: Mon, 18 Mar 2019 16:48:37 +0100
Subject: [PATCH 11/23] Switch Cargohold to YAML-only config (#653)

* Switch Cargohold to YAML-only config
* ID whitelist
* Hi CI
---
 .linting/duplicate-ids-whitelist.txt          |   6 +-
 .../conf/cargohold.demo-docker.yaml           |   3 +
 deploy/services-demo/conf/cargohold.demo.yaml |   3 +
 deploy/services-demo/demo.sh                  |   2 +-
 services/cargohold/cargohold.integration.yaml |   3 +
 services/cargohold/deb/etc/sv/cargohold/run   |  34 +----
 services/cargohold/src/CargoHold/API.hs       |   2 +-
 services/cargohold/src/CargoHold/App.hs       |   2 +-
 services/cargohold/src/CargoHold/Options.hs   | 121 ++++--------------
 services/cargohold/src/Main.hs                |   3 +-
 services/integration.sh                       |   4 +-
 11 files changed, 49 insertions(+), 134 deletions(-)

diff --git a/.linting/duplicate-ids-whitelist.txt b/.linting/duplicate-ids-whitelist.txt
index 4586c0a49f0..7db0a5822e4 100644
--- a/.linting/duplicate-ids-whitelist.txt
+++ b/.linting/duplicate-ids-whitelist.txt
@@ -135,8 +135,8 @@ _optAws 2
 _optCassandra 2
 _optDiscoUrl 2
 _optGundeck 2
-_optLogLevel 2
-_optLogNetStrings 2
+_optLogLevel 3
+_optLogNetStrings 3
 _optSettings 3
 _options 3
_pushNativePriority 2 @@ -648,4 +648,4 @@ x3 6 zAuthAccess 2 zConn 5 zUser 6 -zauth 3 \ No newline at end of file +zauth 3 diff --git a/deploy/services-demo/conf/cargohold.demo-docker.yaml b/deploy/services-demo/conf/cargohold.demo-docker.yaml index 3a966f678f4..0290f1fc0e3 100644 --- a/deploy/services-demo/conf/cargohold.demo-docker.yaml +++ b/deploy/services-demo/conf/cargohold.demo-docker.yaml @@ -11,3 +11,6 @@ aws: settings: maxTotalBytes: 27262976 downloadLinkTTL: 300 # Seconds + +logLevel: Info +logNetStrings: false diff --git a/deploy/services-demo/conf/cargohold.demo.yaml b/deploy/services-demo/conf/cargohold.demo.yaml index 038d6d4efd8..b4200096b44 100644 --- a/deploy/services-demo/conf/cargohold.demo.yaml +++ b/deploy/services-demo/conf/cargohold.demo.yaml @@ -11,3 +11,6 @@ aws: settings: maxTotalBytes: 27262976 downloadLinkTTL: 300 # Seconds + +logLevel: Info +logNetStrings: false diff --git a/deploy/services-demo/demo.sh b/deploy/services-demo/demo.sh index ca4e9a4e82b..6479fb49f59 100755 --- a/deploy/services-demo/demo.sh +++ b/deploy/services-demo/demo.sh @@ -144,7 +144,7 @@ if [ "$docker_deployment" = "false" ]; then run_haskell_service galley ${yellow} run_haskell_service gundeck ${blue} run_haskell_service cannon ${orange} - run_haskell_service cargohold ${purpleish} Info + run_haskell_service cargohold ${purpleish} run_haskell_service proxy ${redish} Info run_haskell_service spar ${orange} run_nginz ${blueish} diff --git a/services/cargohold/cargohold.integration.yaml b/services/cargohold/cargohold.integration.yaml index 8fead3d467e..5254cf9368d 100644 --- a/services/cargohold/cargohold.integration.yaml +++ b/services/cargohold/cargohold.integration.yaml @@ -18,3 +18,6 @@ aws: settings: maxTotalBytes: 27262976 downloadLinkTTL: 300 # Seconds + +logLevel: Info +logNetStrings: false diff --git a/services/cargohold/deb/etc/sv/cargohold/run b/services/cargohold/deb/etc/sv/cargohold/run index cfadd3a301e..9f09707d3e0 100755 --- 
a/services/cargohold/deb/etc/sv/cargohold/run +++ b/services/cargohold/deb/etc/sv/cargohold/run @@ -3,41 +3,17 @@ set -e exec 2>&1 -APP=cargohold - # defaults -USER=${USER:-www-data} -CONFIG=${CONFIG:-/etc/$APP/.env} -HOME=${APP_HOME:-/opt/$APP} +USER=${USER:=www-data} +APP=cargohold +CONFIG=${CONFIG:-/etc/${APP}/${APP}.yaml} +HOME=${APP_HOME:=/opt/$APP} BIN=$HOME/bin/$APP -# we need KHAN_DOMAIN before sourcing $CONFIG -source <(khan --silent metadata --multiline) - if [ ! -f $CONFIG ]; then exec chpst -u $USER get_config; fi -source $CONFIG - -AWS_ACCESS_KEY_ID=${CARGOHOLD_AWS_ACCESS_KEY_ID:+--aws-key-id=$CARGOHOLD_AWS_ACCESS_KEY_ID} -AWS_SECRET_ACCESS_KEY=${CARGOHOLD_AWS_SECRET_ACCESS_KEY:+--aws-secret-key=$CARGOHOLD_AWS_SECRET_ACCESS_KEY} - -export LOG_LEVEL=${CARGOHOLD_LOG_LEVEL:-Info} -export LOG_BUFFER=${CARGOHOLD_LOG_BUFFER:-4096} -export LOG_NETSTR=${CARGOHOLD_LOG_NETSTR:-True} cd $HOME ulimit -n 65536 -exec chpst -u $USER \ - $BIN \ - --host=${CARGOHOLD_HOST:-'127.0.0.1'} \ - --port=${CARGOHOLD_PORT?'unset'} \ - ${AWS_ACCESS_KEY_ID} \ - ${AWS_SECRET_ACCESS_KEY} \ - --aws-s3-endpoint=${CARGOHOLD_AWS_S3_ENDPOINT:-'https://s3.eu-west-1.amazonaws.com'} \ - --aws-s3-bucket=${CARGOHOLD_AWS_S3_BUCKET?'unset'} \ - --aws-cloudfront-domain=${CARGOHOLD_AWS_CLOUDFRONT_DOMAIN?'unset'} \ - --aws-cloudfront-keypair-id=${CARGOHOLD_AWS_CLOUDFRONT_KEYPAIR_ID?'unset'} \ - --aws-cloudfront-private-key=${CARGOHOLD_AWS_CLOUDFRONT_PRIVATEKEY?'unset'} \ - --max-total-bytes=${CARGOHOLD_MAX_TOTAL_BYTES?'unset'} \ - --download-link-ttl=${CARGOHOLD_DOWNLOAD_LINK_TTL:-300} +exec chpst -u $USER $BIN --config-file=${CONFIG} diff --git a/services/cargohold/src/CargoHold/API.hs b/services/cargohold/src/CargoHold/API.hs index bbc6042b559..b17a5089405 100644 --- a/services/cargohold/src/CargoHold/API.hs +++ b/services/cargohold/src/CargoHold/API.hs @@ -1,4 +1,4 @@ -module CargoHold.API (runServer, parseOptions) where +module CargoHold.API (runServer) where import Imports hiding (head) 
import CargoHold.App diff --git a/services/cargohold/src/CargoHold/App.hs b/services/cargohold/src/CargoHold/App.hs index ffe18936cdc..82e730d73d1 100644 --- a/services/cargohold/src/CargoHold/App.hs +++ b/services/cargohold/src/CargoHold/App.hs @@ -85,7 +85,7 @@ makeLenses ''Env newEnv :: Opts -> IO Env newEnv o = do met <- Metrics.metrics - lgr <- Log.mkLogger' + lgr <- Log.mkLogger (o^.optLogLevel) (o^.optLogNetStrings) mgr <- initHttpManager awe <- initAws o lgr mgr return $ Env awe met lgr mgr def (o^.optSettings) diff --git a/services/cargohold/src/CargoHold/Options.hs b/services/cargohold/src/CargoHold/Options.hs index 18d7eefe32a..3043dad1afb 100644 --- a/services/cargohold/src/CargoHold/Options.hs +++ b/services/cargohold/src/CargoHold/Options.hs @@ -4,30 +4,41 @@ module CargoHold.Options where import Imports import CargoHold.CloudFront (Domain (..), KeyPairId (..)) -import Control.Lens +import Control.Lens hiding (Level) import Data.Aeson.TH -import Options.Applicative +import System.Logger (Level) import Util.Options import Util.Options.Common -import qualified Data.Text as T import qualified Ropes.Aws as Aws +-- | AWS CloudFront settings. data CloudFrontOpts = CloudFrontOpts - { _cfDomain :: Domain - , _cfKeyPairId :: KeyPairId - , _cfPrivateKey :: FilePath + { _cfDomain :: Domain -- ^ Domain + , _cfKeyPairId :: KeyPairId -- ^ Keypair ID + , _cfPrivateKey :: FilePath -- ^ Path to private key } deriving (Show, Generic) deriveFromJSON toOptionFieldName ''CloudFrontOpts makeLenses ''CloudFrontOpts data AWSOpts = AWSOpts - { _awsKeyId :: !(Maybe Aws.AccessKeyId) + { + -- | Key ID; if 'Nothing', will be taken from the environment or from instance metadata + -- (when running on an AWS instance) + _awsKeyId :: !(Maybe Aws.AccessKeyId) + -- | Secret key , _awsSecretKey :: !(Maybe Aws.SecretAccessKey) + -- | S3 endpoint , _awsS3Endpoint :: !AWSEndpoint + -- | S3 endpoint for generating download links. 
Useful if Cargohold is configured to use + -- an S3 replacement running inside the internal network (in which case internally we + -- would use one hostname for S3, and when generating an asset link for a client app, we + -- would use another hostname). , _awsS3DownloadEndpoint :: !(Maybe AWSEndpoint) + -- | S3 bucket name , _awsS3Bucket :: !Text + -- | AWS CloudFront options , _awsCloudFront :: !(Maybe CloudFrontOpts) } deriving (Show, Generic) @@ -35,7 +46,10 @@ deriveFromJSON toOptionFieldName ''AWSOpts makeLenses ''AWSOpts data Settings = Settings - { _setMaxTotalBytes :: !Int + { + -- | Maximum allowed size for uploads, in bytes + _setMaxTotalBytes :: !Int + -- | TTL for download links, in seconds , _setDownloadLinkTTL :: !Word } deriving (Show, Generic) @@ -43,95 +57,14 @@ deriveFromJSON toOptionFieldName ''Settings makeLenses ''Settings data Opts = Opts - { _optCargohold :: !Endpoint + { _optCargohold :: !Endpoint -- ^ Hostname and port to bind to , _optAws :: !AWSOpts , _optSettings :: !Settings + -- Logging + , _optLogLevel :: !Level -- ^ Log level (Debug, Info, etc) + , _optLogNetStrings :: !Bool -- ^ Use netstrings encoding: + -- } deriving (Show, Generic) deriveFromJSON toOptionFieldName ''Opts makeLenses ''Opts - -parseOptions :: IO Opts -parseOptions = execParser (info (helper <*> optsParser) desc) - where - desc = header "CargoHold - Asset Service" <> fullDesc - -optsParser :: Parser Opts -optsParser = Opts <$> - (Endpoint <$> - (textOption $ - long "host" - <> value "*4" - <> showDefault - <> metavar "HOSTNAME" - <> help "Hostname or address to bind to") - <*> - (option auto $ - long "port" - <> short 'p' - <> metavar "PORT" - <> help "Port to listen on")) - <*> awsParser - <*> settingsParser - where - cloudFrontParser :: Parser CloudFrontOpts - cloudFrontParser = CloudFrontOpts <$> - (fmap Domain . textOption $ - long "aws-cloudfront-domain" - <> metavar "STRING" - <> help "AWS CloudFront Domain") - - <*> (fmap KeyPairId . 
textOption $ - long "aws-cloudfront-keypair-id" - <> metavar "STRING" - <> help "AWS CloudFront Keypair ID") - - <*> strOption - (long "aws-cloudfront-private-key" - <> metavar "FILE" - <> help "AWS CloudFront Private Key") - - awsParser :: Parser AWSOpts - awsParser = AWSOpts <$> - (optional . fmap Aws.AccessKeyId . bytesOption $ - long "aws-key-id" - <> metavar "STRING" - <> help "AWS Access Key ID") - <*> (optional . fmap Aws.SecretAccessKey . bytesOption $ - long "aws-secret-key" - <> metavar "STRING" - <> help "AWS Secret Access Key") - - <*> (option parseAWSEndpoint $ - long "aws-s3-endpoint" - <> value (AWSEndpoint "s3.eu-west-1.amazonaws.com" True 443) - <> metavar "STRING" - <> showDefault - <> help "aws S3 endpoint") - - <*> optional (option parseAWSEndpoint $ - long "aws-s3-download-endpoint" - <> metavar "STRING" - <> showDefault - <> help "aws S3 endpoint used for generating download links") - - <*> (fmap T.pack . strOption $ - long "aws-s3-bucket" - <> metavar "STRING" - <> help "S3 bucket name") - <*> optional cloudFrontParser - - settingsParser :: Parser Settings - settingsParser = Settings <$> - option auto - (long "max-total-bytes" - <> metavar "INT" - <> value (25 * 1024 * 1024) - <> showDefault - <> help "Maximum allowed size in bytes for uploads") - <*> option auto - (long "download-link-ttl" - <> metavar "INT" - <> value 300 - <> showDefault - <> help "TTL for download links in seconds") diff --git a/services/cargohold/src/Main.hs b/services/cargohold/src/Main.hs index c1eb8522c07..bcdf074f2fc 100644 --- a/services/cargohold/src/Main.hs +++ b/services/cargohold/src/Main.hs @@ -4,12 +4,11 @@ import Imports import CargoHold.API import OpenSSL (withOpenSSL) -import CargoHold.Options import Util.Options main :: IO () main = withOpenSSL $ do - options <- getOptions desc (Just optsParser) defaultPath + options <- getOptions desc Nothing defaultPath runServer options where desc = "Cargohold - Asset Storage" diff --git a/services/integration.sh 
b/services/integration.sh index bd246fa011f..80f8bdb240f 100755 --- a/services/integration.sh +++ b/services/integration.sh @@ -77,8 +77,6 @@ function run() { service=$1 instance=$2 colour=$3 - # TODO can be removed once all services have been switched to YAML configs - [ $# -gt 3 ] && export LOG_LEVEL=$4 ( ( cd "${DIR}/${service}" && "${TOP_LEVEL}/dist/${service}" -c "${service}${instance}.integration${integration_file_extension}" ) || kill_all) \ | sed -e "s/^/$(tput setaf ${colour})[${service}] /" -e "s/$/$(tput sgr0)/" & } @@ -90,7 +88,7 @@ run galley "" ${yellow} run gundeck "" ${blue} run cannon "" ${orange} run cannon "2" ${orange} -run cargohold "" ${purpleish} Info +run cargohold "" ${purpleish} run spar "" ${orange} # the ports are copied from ./integration.yaml From 42234270e6c9bfab69e8ee93b9b6ed79b66ff8e6 Mon Sep 17 00:00:00 2001 From: Julia Longtin Date: Tue, 19 Mar 2019 13:27:05 +0100 Subject: [PATCH 12/23] docker image building for all of the docker images our integration tests require. (#622) * build our docker dependencies for multiple archetectures. * blacklist builds that do not work on some emulators, fix all of the git checkouts to specified commits, add minio, bump the tini version, and fix fakesqs and localstack. * break documentation out of the Makefile. * add another note. * run through the process on another laptop, fleshing out the README. also, amd64 support. * missing dollar sign. * take feedback into account, and add some sanity checks. * fix missing comma, that was making variants not be used for arm. * wire name * take into account creating the dockerhub repositories. * add cassandra, and provide ways to decrease memory usage by java images, along with race condition fixes. * typo fix. * Update deploy/docker-ephemeral/build/Makefile Co-Authored-By: julialongtin * seperate out two ways of using this. * add a note about docker image upload being optional. 
* use new cassandra and elasticsearch images, to save multiple gigs of ram when running integration tests. * switch to https access instead of ssh access. * make the list of arches overridable. * no -i.bak on macs. * allow sed to be parameterized, for macs. * correct inline mistake. * initial revision of pull request documentation. * change to markdown, continue adding. * more. * more. * even more boxes. * add an extra empty line at the end of file. * remove extra line. * simplify make rules, and spacing changes. * almost done... * only two rules left... * Done. * merge changes from fisx * make instructions a bit more clear, and use the manifest for cassandra, instead of the architecture specific image name. * switch to hand built images for most images. * rm trailing whitespace, normalize tabs. * review * take review comments into account. * take review comments into account. * rename document to adhere to naming conventions for markdown files. * try an environment variable to leave elasticsearch in development mode. 
--- README.md | 2 + deploy/docker-ephemeral/build/Makefile | 281 +++++ deploy/docker-ephemeral/build/README.md | 54 + deploy/docker-ephemeral/docker-compose.yaml | 35 +- docs/developer/dependencies.md | 7 + docs/reference/make-docker-and-qemu.md | 1072 +++++++++++++++++++ 6 files changed, 1443 insertions(+), 8 deletions(-) create mode 100644 deploy/docker-ephemeral/build/Makefile create mode 100644 deploy/docker-ephemeral/build/README.md create mode 100644 docs/reference/make-docker-and-qemu.md diff --git a/README.md b/README.md index 911c1ae46d7..4ca5538a1e5 100644 --- a/README.md +++ b/README.md @@ -136,6 +136,8 @@ Integration tests require all of the haskell services (brig, galley, cannon, gun - SNS - S3 - DynamoDB +- Required additional software: + - netcat (in order to allow the services being tested to talk to the dependencies above) Setting up these real, but in-memory internal and "fake" external dependencies is done easiest using [`docker-compose`](https://docs.docker.com/compose/install/). Run the following in a separate terminal (it will block that terminal, C-c to shut all these docker images down again): diff --git a/deploy/docker-ephemeral/build/Makefile b/deploy/docker-ephemeral/build/Makefile new file mode 100644 index 00000000000..c0d254bd185 --- /dev/null +++ b/deploy/docker-ephemeral/build/Makefile @@ -0,0 +1,281 @@ +# use DOCKER_ so we allow users to pass in values without conflicting with USERNAME, EMAIL, or somesuch already in their environments. +DOCKER_USERNAME ?= wireserver +DOCKER_REALNAME ?= Wire +DOCKER_EMAIL ?= backend@wire.com +TAGNAME ?= :0.0.9 + +# shorten the variable names above, to make the make rules below a little clearer to read. +USERNAME := $(DOCKER_USERNAME) +REALNAME := $(DOCKER_REALNAME) +EMAIL := $(DOCKER_EMAIL) + +# the distribution we're going to build for. this can be either DEBIAN or ALPINE. +DIST ?= DEBIAN + +# these are docker architecture names, not debian. 
+STRETCHARCHES := arm32v5 arm32v7 386 amd64 arm64v8 ppc64le s390x +JESSIEARCHES := arm32v5 arm32v7 386 amd64 +# the arches that our images based on debian support. +# note that we only care about the pi, the 386, and amd64 for now. +DEBARCHES := arm32v5 arm32v7 386 amd64 + +# the names of the docker images we're building that are based on debian jessie. +JESSIENAMES := airdock_fakesqs airdock_rvm airdock_base smtp +# the names of the docker images we're building that are based on debian stretch. +STRETCHNAMES := dynamodb_local cassandra +# the names of the docker images that we're building that are based on debian. +DEBNAMES := $(JESSIENAMES) $(STRETCHNAMES) + +# the arches that we build for alpine. +ALPINEARCHES := amd64 386 arm32v6 +# images we build that are based on alpine. +ALPINENAMES := elasticsearch java_maven_node_python localstack minio + +# dependencies between docker images. - +PREBUILDS := airdock_rvm-airdock_base airdock_fakesqs-airdock_rvm localstack-java_maven_node_python + +# manifest files don't work for these when they are finding the image they are based on. +# by adding the name of the docker image here, we use the image:tag- format, instead of /image:tag. +NOMANIFEST := airdock_rvm airdock_fakesqs localstack + +# convert from debian architecture string to docker architecture string. +dockerarch=$(patsubst i%,%,$(patsubst armel,arm32v5,$(patsubst armhf,arm32v7,$(patsubst arm64,arm64v8,$(1))))) + +# the local architecture, in debian format. (i386, amd64, armel, armhf, arm64, ..) +LOCALDEBARCH := $(shell [ ! -z `which dpkg` ] && dpkg --print-architecture) +# the local architecture, in docker format. (386, amd64, arm32v5, arm32v7, arm64v8, ...) +LOCALARCH ?= $(call dockerarch,$(LOCALDEBARCH)) + +ifeq ($(LOCALARCH),) + $(error LOCALARCH is empty, you may need to supply it.) +endif + +# FIXME: make this a section that depends on LOCALARCH, so we can allow these images to be built on native arm32. +# FIXME: what's up with dynamodb? 
+# note that qemu's x86_64 support is not strong enough to cross-build most things on i386. +# these targets won't build on the system emulators for these arches. working with the qemu team to fix. they think it might be https://bugs.launchpad.net/qemu/+bug/1813398 . +BADARCHSIM := localstack-arm32v6 java_maven_node_python-arm32v6 dynamodb_local-386 + +# set the targets, depending on the distro base specified. this is so that the debian images are built for all of the debian arches, and the alpine images for its arches. +ifeq ($(DIST),DEBIAN) + ARCHES ?= $(DEBARCHES) + NAMES ?= $(DEBNAMES) +endif +ifeq ($(DIST),ALPINE) + ARCHES ?= $(ALPINEARCHES) + NAMES ?= $(ALPINENAMES) +endif + +# which sed to use. GNU-SED for macs. +SED ?= sed + +# turn on experimental features in docker. +export DOCKER_CLI_EXPERIMENTAL=enabled + +# allow for us to (ab)use $$* in dependencies of rules. +.SECONDEXPANSION: + +# disable make's default builtin rules, to make debugging output cleaner. +MAKEFLAGS += --no-builtin-rules + +# make sure we use bash. for proper quoting when inserting JVM_OPTIONS snippet. +SHELL = bash + +# empty out the default suffix list, to make debugging output cleaner. +.SUFFIXES: + +# too much haskell. returns first or second from -, respectively. +fst=$(word 1, $(subst -, ,$(1))) +snd=$(word 2, $(subst -, ,$(1))) + +# filter the list of architectures, removing architectures that we know do not work for a given docker image. +goodarches=$(filter-out $(call snd,$(foreach arch,$(ARCHES),$(filter $(1)-$(arch),$(BADARCHSIM)))),$(ARCHES)) +# filter the list of names, returning only names that have no pre-dependencies. +nodeps=$(filter-out $(foreach target,$(NAMES),$(call snd,$(foreach dependency,$(NAMES),$(filter $(target)-$(dependency),$(PREBUILDS))))),$(NAMES)) + +# the three entry points we expect users to use. 
all by default, to create and upload either debian or alpine images, build-, to build a single image (for all arches, but without the manifest), push- to build a single image, push the image, build its manifest, and push it to dockerhub. +all: $(foreach image,$(nodeps),manifest-push-$(image)) + +# build- +build-%: $$(foreach arch,$$(call goodarches,%),create-$$(arch)-$$*) + @echo -n + +.PHONY: build-all +build-all: $(foreach image,$(nodeps),build-$(image)) + +# push- +push-%: manifest-push-% + @echo -n + +.PHONY: push-all +push-all: $(foreach image,$(nodeps),manifest-push-$(image)) + +# manifests use a slightly different form of architecture name than docker itself. arm instead of arm32, and a separate variant field. +maniarch=$(patsubst %32,%,$(call fst,$(subst v, ,$(1)))) +# separate and use the variant, if it is part of the architecture name. +manivariant=$(foreach variant,$(word 2, $(subst v, ,$(1))), --variant $(variant)) + +# manifest-push- +manifest-push-%: $$(foreach arch,$$(call goodarches,$$*), manifest-annotate-$$(arch)-$$*) + docker manifest push $(USERNAME)/$*$(TAGNAME) + +#manifest-annotate-- +manifest-annotate-%: manifest-create-$$(call snd,$$*) + docker manifest annotate $(USERNAME)/$(call snd,$*)$(TAGNAME) $(USERNAME)/$(call snd,$*)$(TAGNAME)-$(call fst,$*) --arch $(call maniarch,$(call fst,$*)) $(call manivariant,$(call fst,$*)) + +#manifest-create- +manifest-create-%: $$(foreach arch,$$(call goodarches,%), upload-$$(arch)-$$*) + docker manifest create $(USERNAME)/$*$(TAGNAME) $(patsubst %,$(USERNAME)/$*$(TAGNAME)-%,$(call goodarches,$*)) --amend + +# upload-- +upload-%: create-% $$(foreach predep,$$(filter $$(call snd,%)-%,$$(PREBUILDS)), dep-upload-$$(call fst,$$*)-$$(call snd,$$(predep))) + docker push $(USERNAME)/$(call snd,$*)$(TAGNAME)-$(call fst,$*) | cat + +dep-upload-%: create-% $$(foreach predep,$$(filter $$(call snd,%)-%,$$(PREBUILDS)), dep-subupload-$$(call fst,$$*)-$$(call snd,$$(predep))) + docker push $(USERNAME)/$(call
snd,$*)$(TAGNAME)-$(call fst,$*) | cat + +dep-subupload-%: create-% + docker push $(USERNAME)/$(call snd,$*)$(TAGNAME)-$(call fst,$*) | cat + +# create-- +create-%: Dockerfile-$$(foreach target,$$(filter $$(call snd,$$*),$(NOMANIFEST)),NOMANIFEST-)$$* $$(foreach predep,$$(filter $$(call snd,%)-%,$(PREBUILDS)), depend-create-$$(call fst,$$*)-$$(call snd,$$(predep))) + cd $(call snd,$*) && docker build -t $(USERNAME)/$(call snd,$*)$(TAGNAME)-$(call fst,$*) -f Dockerfile-$(call fst,$*) . | cat + +depend-create-%: Dockerfile-$$(foreach target,$$(filter $$(call snd,$$*),$(NOMANIFEST)),NOMANIFEST-)$$* $$(foreach predep,$$(filter $$(call snd,%)-%,$(PREBUILDS)), depend-subcreate-$$(call fst,$$*)-$$(call snd,$$(predep))) + cd $(call snd,$*) && docker build -t $(USERNAME)/$(call snd,$*)$(TAGNAME)-$(call fst,$*) -f Dockerfile-$(call fst,$*) . | cat + +depend-subcreate-%: Dockerfile-$$(foreach target,$$(filter $$(call snd,$$*),$(NOMANIFEST)),NOMANIFEST-)$$* + cd $(call snd,$*) && docker build -t $(USERNAME)/$(call snd,$*)$(TAGNAME)-$(call fst,$*) -f Dockerfile-$(call fst,$*) . | cat + +# with a broken manifest (our images, either docker or local), we have to use a postfix to request docker images other than the one for our native architecture. +archpostfix=$(foreach arch,$(filter-out $(filter-out $(word 3, $(subst -, ,$(filter $(call snd,$(1))-%-$(call fst,$(1)),$(foreach prebuild,$(PREBUILDS),$(prebuild)-$(call fst,$(1)))))),$(LOCALARCH)),$(call fst,$(1))),-$(arch)) +# with a working manifest (official images from docker built correctly), we have to use a path when requesting docker images other than the one for our native architecture. +archpath=$(foreach arch,$(patsubst 386,i386,$(filter-out $(LOCALARCH),$(1))),$(arch)/) + +# handle cases where a manifest file is not being respected, and we have to use :- format.
+# Dockerfile-NOMANIFEST-- +Dockerfile-NOMANIFEST-%: $$(call snd,%)/Dockerfile + cd $(call snd,$*) && cat Dockerfile | ${SED} "s/^\(MAINTAINER\).*/\1 $(REALNAME) \"$(EMAIL)\"/" | ${SED} "s=^\(FROM \)\(.*\)$$=\1\2$(call archpostfix,$*)=" > Dockerfile-$(call fst,$*) + +# handle situations where a manifest is present in upstream, and available as /: +# Dockerfile-- +Dockerfile-%: $$(call snd,%)/Dockerfile + cd $(call snd,$*) && cat Dockerfile | ${SED} "s/^\(MAINTAINER\).*/\1 $(REALNAME) \"$(EMAIL)\"/" | ${SED} "s=^\(FROM \)\(.*\)$$=\1$(call archpath,$(call fst,$*))\2=" > Dockerfile-$(call fst,$*) + +# real files, finally! + +# define commit IDs for the versions we're using. +SMTP_COMMIT ?= 8ad8b849855be2cb6a11d97d332d27ba3e47483f +DYNAMODB_COMMIT ?= c1eabc28e6d08c91672ff3f1973791bca2e08918 +ELASTICSEARCH_COMMIT ?= 06779bd8db7ab81d6706c8ede9981d815e143ea3 +AIRDOCKBASE_COMMIT ?= 692625c9da3639129361dc6ec4eacf73f444e98d +AIRDOCKRVM_COMMIT ?= cdc506d68b92fa4ffcc7c32a1fc7560c838b1da9 +AIRDOCKFAKESQS_COMMIT ?= 9547ca5e5b6d7c1b79af53e541f8940df09a495d +JAVAMAVENNODEPYTHON_COMMIT ?= 645af21162fffd736c93ab0047ae736dc6881959 +LOCALSTACK_COMMIT ?= 645af21162fffd736c93ab0047ae736dc6881959 +MINIO_COMMIT ?= 118270d76fc90f1e54cd9510cee9688bd717250b +CASSANDRA_COMMIT ?= 064fb4e2682bf9c1909e4cb27225fa74862c9086 + +smtp/Dockerfile: + git clone https://github.com/namshi/docker-smtp.git smtp + cd smtp && git reset --hard $(SMTP_COMMIT) + +dynamodb_local/Dockerfile: + git clone https://github.com/cnadiminti/docker-dynamodb-local.git dynamodb_local + cd dynamodb_local && git reset --hard $(DYNAMODB_COMMIT) + +elasticsearch/Dockerfile: + git clone https://github.com/blacktop/docker-elasticsearch-alpine.git elasticsearch-all + cd elasticsearch-all && git reset --hard $(ELASTICSEARCH_COMMIT) + cp -R elasticsearch-all/5.6/ elasticsearch + # add a block to the entrypoint script to interpret CS_JVM_OPTIONS, modifying the jvm.options before launching elasticsearch. 
+ # first, add a marker to be replaced before the last if. + ${SED} -i.bak -r ':a;$$!{N;ba};s/^(.*)(\n?)fi/\2\1fi\nREPLACEME/' elasticsearch/elastic-entrypoint.sh + # next, load our variables. + ${SED} -i.bak 's@REPLACEME@MY_APP_CONFIG="/usr/share/elasticsearch/config/"\n&@' elasticsearch/elastic-entrypoint.sh + # add our parser and replacer. + ${SED} -i.bak $$'s@REPLACEME@if [ ! -z "$${JVM_OPTIONS_ES}" ]; then\\nfor x in $${JVM_OPTIONS_ES}; do { l="$${x%%=*}"; r=""; e=""; [ "$$x" != "$${x/=//}" ] \&\& e="=" \&\& r="$${x##*=}"; [ "$$x" != "$${x##-Xm?}" ] \&\& r="$${x##-Xm?}" \&\& l="$${x%%$$r}"; echo $$l $$e $$r; sed -i.bak -r \'s/^[# ]?(\'"$$l$$e"\').*/\\\\1\'"$$r"\'/\' "$$MY_APP_CONFIG/jvm.options"; diff "$$MY_APP_CONFIG/jvm.options.bak" "$$MY_APP_CONFIG/jvm.options" \&\& echo "no difference"; } done;\\nfi\\n&@' elasticsearch/elastic-entrypoint.sh + # remove the marker we added earlier. + ${SED} -i.bak 's@REPLACEME@@' elasticsearch/elastic-entrypoint.sh + +airdock_base/Dockerfile: + git clone https://github.com/airdock-io/docker-base.git airdock_base-all + cd airdock_base-all && git reset --hard $(AIRDOCKBASE_COMMIT) + cp -R airdock_base-all/jessie airdock_base + # work around go compiler bug by using newer version of GOSU. https://bugs.launchpad.net/qemu/+bug/1696353 + ${SED} -i.bak "s/GOSU_VERSION=.* /GOSU_VERSION=1.11 /" $@ + # work around missing architecture specific binaries in earlier versions of tini. + ${SED} -i.bak "s/TINI_VERSION=.*/TINI_VERSION=v0.16.1/" $@ + # work around the lack of architecture usage when downloading tini binaries. 
https://github.com/airdock-io/docker-base/issues/8 + ${SED} -i.bak 's/tini\(.asc\|\)"/tini-\$$dpkgArch\1"/' $@ + +airdock_rvm/Dockerfile: + git clone https://github.com/airdock-io/docker-rvm.git airdock_rvm-all + cd airdock_rvm-all && git reset --hard $(AIRDOCKRVM_COMMIT) + cp -R airdock_rvm-all/jessie-rvm airdock_rvm + ${SED} -i.bak "s=airdock/base:jessie=$(USERNAME)/airdock_base$(TAGNAME)=" $@ + # add a second key used to sign ruby to the dockerfile. https://github.com/airdock-io/docker-rvm/issues/1 + ${SED} -i.bak "s=\(409B6B1796C275462A1703113804BB82D39DC0E3\)=\1 7D2BAF1CF37B13E2069D6956105BD0E739499BDB=" $@ + +airdock_fakesqs/Dockerfile: + git clone https://github.com/airdock-io/docker-fake-sqs.git airdock_fakesqs-all + cd airdock_fakesqs-all && git reset --hard $(AIRDOCKFAKESQS_COMMIT) + cp -R airdock_fakesqs-all/0.3.1 airdock_fakesqs + ${SED} -i.bak "s=airdock/rvm:latest=$(USERNAME)/airdock_rvm$(TAGNAME)=" $@ + # add a workdir declaration to the final switch to root. + ${SED} -i.bak "s=^USER root=USER root\nWORKDIR /=" $@ + # break directory creation into two pieces, one run by root. + ${SED} -i.bak "s=^USER ruby=USER root=" $@ + ${SED} -i.bak "s=cd /srv/ruby/fake-sqs.*=chown ruby.ruby /srv/ruby/fake-sqs\nUSER ruby\nWORKDIR /srv/ruby/fake-sqs\nRUN cd /srv/ruby/fake-sqs \&\& \\\\=" $@ + +java_maven_node_python/Dockerfile: + git clone https://github.com/localstack/localstack.git java_maven_node_python + cd java_maven_node_python && git reset --hard $(JAVAMAVENNODEPYTHON_COMMIT) + cd java_maven_node_python && mv bin/Dockerfile.base Dockerfile + # disable installing docker-ce. not available on many architectures in binary form. + ${SED} -i.bak "/.*install Docker.*/{N;N;N;N;N;d}" $@ + +localstack/Dockerfile: + git clone https://github.com/localstack/localstack.git localstack + cd localstack && git reset --hard $(LOCALSTACK_COMMIT) + ${SED} -i.bak "s=localstack/java-maven-node-python=$(USERNAME)/java_maven_node_python$(TAGNAME)=" $@ + # skip tests. 
they take too long. + ${SED} -i.bak "s=make lint.*=make lint=" localstack/Makefile + ${SED} -i.bak "s=\(.*lambda.*\)=#\1=" localstack/Makefile + +minio/Dockerfile: + git clone https://github.com/minio/minio.git minio + cd minio && git reset --hard $(MINIO_COMMIT) + +cassandra/Dockerfile: + git clone https://github.com/docker-library/cassandra.git cassandra-all + cd cassandra-all && git reset --hard $(CASSANDRA_COMMIT) + cp -R cassandra-all/3.11 cassandra + # work around go compiler bug by using newer version of GOSU. https://bugs.launchpad.net/qemu/+bug/1696353 + ${SED} -i.bak "s/GOSU_VERSION .*/GOSU_VERSION 1.11/" $@ + # add a block to the entrypoint script to interpret CS_JVM_OPTIONS, modifying the jvm.options before launching cassandra. + # first, add a marker to be replaced before the last if. + ${SED} -i.bak -r ':a;$$!{N;ba};s/^(.*)(\n?)fi/\2\1REPLACEME\nfi/' cassandra/docker-entrypoint.sh + # next, load our variables. + ${SED} -i.bak 's/REPLACEME/\nAPP_CONFIG="$$CASSANDRA_CONFIG"\n&/' cassandra/docker-entrypoint.sh + ${SED} -i.bak 's/REPLACEME/JVM_OPTIONS="$$CS_JVM_OPTIONS"\n&/' cassandra/docker-entrypoint.sh + # add our parser and replacer. + ${SED} -i.bak $$'s@REPLACEME@if [ ! -z "$${JVM_OPTIONS}" ]; then\\nfor x in $${JVM_OPTIONS}; do { l="$${x%%=*}"; r=""; e=""; [ "$$x" != "$${x/=//}" ] \&\& e="=" \&\& r="$${x##*=}"; [ "$$x" != "$${x##-Xm?}" ] \&\& r="$${x##-Xm?}" \&\& l="$${x%%$$r}"; echo $$l $$e $$r; _sed-in-place "$$APP_CONFIG/jvm.options" -r \'s/^[# ]*(\'"$$l$$e"\').*/\\\\1\'"$$r"\'/\'; } done\\nfi\\n&@' cassandra/docker-entrypoint.sh + # remove the marker we added earlier. + ${SED} -i.bak 's@REPLACEME@@' cassandra/docker-entrypoint.sh + +# cleanup. remove the directories we set up for building, as well as the git repos we download. 
+.PHONY: clean +clean: + rm -rf elasticsearch-all airdock_base-all airdock_rvm-all airdock_fakesqs-all cassandra-all $(DEBNAMES) $(ALPINENAMES) + +.PHONY: cleandocker +cleandocker: + docker rm $$(docker ps -a -q) || true + docker rmi $$(docker images -q) --force || true + +names: + @echo Debian based images: + @echo $(DEBNAMES) + @echo Alpine based images: + @echo $(ALPINENAMES) diff --git a/deploy/docker-ephemeral/build/README.md b/deploy/docker-ephemeral/build/README.md new file mode 100644 index 00000000000..0ce32305265 --- /dev/null +++ b/deploy/docker-ephemeral/build/README.md @@ -0,0 +1,54 @@ +A makefile that uses docker.io and qemu-user-static to build dependencies for our integration tests. + +Builds and uploads docker images for multiple architectures. Allows for '-j' to build multiple images at once. Uploads assume the hub.docker.com docker registry. + +# Setup + +## Docker + +Follow the instructions in [our dependencies file](doc/Dependencies.md) to ensure you have docker installed, and logged in. + +## qemu + +### Debian + + +```bash +apt-get install qemu-user-static +sudo service binfmt-support start +``` + +### Fedora + +`sudo dnf install -y qemu-user-static` + +# Using + +Assuming you have docker, and have followed the above instructions, "make build-all" should work. This builds all of the images, and places them in docker on the local machine. +To build an individual image (and its dependent images), run "make-". To see a list of images that are buildable, run "make names". + +## Using with Dockerhub + +If you want to upload images to dockerhub, you must go to dockerhub, and create repositories under your user with the names of the images you want to upload. Again, to get the list of names buildable with this Makefile, type 'make names'. + +If you don't want to change the Makefile, add the DOCKER_USERNAME, DOCKER_EMAIL, and DOCKER_REALNAME environment variables.
For instance, when I want to build all debian images, and upload them to dockerhub, I use: +```bash +make DIST=DEBIAN DOCKER_USERNAME=julialongtin DOCKER_EMAIL=julia.longtin@wire.com DOCKER_REALNAME='Julia Longtin' push-all +``` + +You can also push a single image (and its dependencies) with "make push-". + +If you want your builds to go faster, and don't mind more garbled output, use the '-j' argument to make, to parallelize the builds. + +By default, this makefile builds and uploads the debian based images. Use the 'DIST=ALPINE' environment variable to build the alpine based images instead. + +# Troubleshooting: +## binfmt support: + +Examine the following file, and ensure the 'flags:' line has an "F" flag on it: +`cat /proc/sys/fs/binfmt_misc/qemu-arm | grep flags` + +If it doesn't, try restarting binfmt-support on debian. + diff --git a/deploy/docker-ephemeral/docker-compose.yaml b/deploy/docker-ephemeral/docker-compose.yaml index c15e885aae3..76d6f04aca3 100644 --- a/deploy/docker-ephemeral/docker-compose.yaml +++ b/deploy/docker-ephemeral/docker-compose.yaml @@ -6,7 +6,8 @@ networks: services: fake_dynamodb: container_name: demo_wire_dynamodb - image: cnadiminti/dynamodb-local:2018-04-11 +# image: cnadiminti/dynamodb-local:2018-04-11 + image: julialongtin/dynamodb_local:0.0.9 ports: - 127.0.0.1:4567:8000 networks: @@ -14,7 +15,8 @@ services: fake_sqs: container_name: demo_wire_sqs - image: airdock/fake-sqs:0.3.1 +# image: airdock/fake-sqs:0.3.1 + image: julialongtin/airdock_fakesqs:0.0.9 ports: - 127.0.0.1:4568:4568 networks: @@ -22,7 +24,8 @@ services: fake_localstack: container_name: demo_wire_localstack - image: localstack/localstack:0.8.0 # NB: this is younger than 0.8.6! +# image: localstack/localstack:0.8.0 # NB: this is younger than 0.8.6!
+ image: julialongtin/localstack:0.0.9 ports: - 127.0.0.1:4569:4579 # ses # needed for local integration tests - 127.0.0.1:4575:4575 # sns @@ -35,8 +38,8 @@ services: basic_smtp: # needed for demo setup container_name: demo_wire_smtp - # https://github.com/namshi/docker-smtp - image: namshi/smtp +# image: namshi/smtp + image: julialongtin/smtp:0.0.9 ports: - 127.0.0.1:2500:25 networks: @@ -44,7 +47,8 @@ services: fake_s3: container_name: demo_wire_s3 - image: minio/minio:RELEASE.2018-05-25T19-49-13Z +# image: minio/minio:RELEASE.2018-05-25T19-49-13Z + image: julialongtin/minio:0.0.9 ports: - "127.0.0.1:4570:9000" environment: @@ -59,6 +63,7 @@ services: # ports: # - "61613:61613" + # FIXME: replace redis image with one we build. redis: container_name: demo_wire_redis image: redis:3.0.7-alpine @@ -69,20 +74,33 @@ services: elasticsearch: container_name: demo_wire_elasticsearch - image: elasticsearch:5.6 + #image: elasticsearch:5.6 + image: julialongtin/elasticsearch:0.0.9-amd64 # https://hub.docker.com/_/elastic is deprecated, but 6.2.4 did not work without further changes. # image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4 ports: - "127.0.0.1:9200:9200" - "127.0.0.1:9300:9300" + environment: + - "bootstrap.system_call_filter=false" +# ES_JVM_OPTIONS is reserved, so... +# what's present in the jvm.options file by default. +# - "JVM_OPTIONS_ES=-Xmx2g -Xms2g" + - "JVM_OPTIONS_ES=-Xmx512m -Xms512m" + - "discovery.type=single-node" networks: - demo_wire cassandra: container_name: demo_wire_cassandra - image: cassandra:3.11.2 + #image: cassandra:3.11.2 + image: julialongtin/cassandra:0.0.9 ports: - "127.0.0.1:9042:9042" + environment: +# what's present in the jvm.options file by default. +# - "CS_JAVA_OPTIONS=-Xmx1024M -Xms1024M -Xmn200M" + - "CS_JVM_OPTIONS=-Xmx128M -Xms128M -Xmn50M" networks: - demo_wire @@ -146,6 +164,7 @@ services: networks: - demo_wire + # FIXME: replace aws_cli with an image that we build. 
aws_cli: image: mesosphere/aws-cli:1.14.5 depends_on: diff --git a/docs/developer/dependencies.md b/docs/developer/dependencies.md index fff116e191a..23cb5b2762d 100644 --- a/docs/developer/dependencies.md +++ b/docs/developer/dependencies.md @@ -149,6 +149,13 @@ _Note_: While it is possible to use non-docker solutions to set up and configure sudo apt install docker.io docker-compose ``` +After installing docker.io, add your user to the docker group, and restart your shell (usually involving a restart of your graphical environment). + +Once you've logged in again, if you would like to upload any docker images (optional): +```bash +docker login --username= +``` + ### Generic: * [Install docker](https://docker.com) diff --git a/docs/reference/make-docker-and-qemu.md b/docs/reference/make-docker-and-qemu.md new file mode 100644 index 00000000000..c4ab362d27c --- /dev/null +++ b/docs/reference/make-docker-and-qemu.md @@ -0,0 +1,1072 @@ +# About this document: +This document is written with the goal of explaining https://github.com/wireapp/wire-server/pull/622 well enough that someone can honestly review it. :) + +In this document, we're going to rapidly bounce back and forth between GNU make, bash, GNU sed, Docker, and QEMU. + +# What does this Makefile do? Why was it created? + +To answer that, we're going to have to go back to Wire-Server, specifically, our integration tests. Integration tests are run locally on all of our machines, in order to ensure that changes we make to the Wire backend do not break currently existing functionality. In order to simulate the components that Wire's backend depends on (S3, cassandra, redis, etc.), we use a series of docker images. These docker images are downloaded from dockerhub, are maintained (or not maintained) by outside parties, and are built by those parties.
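The Makefile described below generates those per-architecture images by rewriting the FROM (and MAINTAINER) lines of each upstream Dockerfile with sed. As a taste of what that looks like, here is a self-contained sketch of the same transform; the toy Dockerfile content and the names in it are made up for illustration, not taken from any real upstream image:

```bash
# A minimal sketch of the per-architecture Dockerfile rewrite.
# The Dockerfile content below is made up for illustration.
tmp=$(mktemp -d) && cd "$tmp"

cat > Dockerfile <<'EOF'
FROM debian:jessie
MAINTAINER someone@example.com
RUN echo hello
EOF

# Replace the maintainer line, and prefix the base image with an
# architecture path -- the same shape of sed rules the Makefile uses.
sed 's/^\(MAINTAINER\).*/\1 Example Name "user@example.com"/' Dockerfile \
  | sed 's=^\(FROM \)\(.*\)$=\1i386/\2=' > Dockerfile-386

head -n 1 Dockerfile-386   # prints: FROM i386/debian:jessie
```

The real Makefile parameterizes the architecture path and the maintainer name through variables; the sketch hard-codes i386 to keep it short.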
+ +When a docker image is built, even if the docker image is something like a java app, or a pile of perl/node/etc, the interpreters (openjdk, node, perl) are embedded into the image. Those interpreters are compiled for a specific processor architecture, and only run on that architecture (and supersets of it). For instance, an AMD64 image will run only on an AMD64 system, but a 386 image will run on AMD64, since AMD64 is a superset of 386. Neither of those images will run on an ARM, like a Raspberry pi. + +This Makefile contains rules that allow our Mac users to build all of the docker images locally on their machine, with some minor improvements, which will save us about 2.5G of RAM during integration tests. Additionally, it contains rules for uploading these images to dockerhub for others to use, and support for linux users to build images for arm32v5, arm32v7, 386, and AMD64, despite not being on these architectures. + +It builds non-AMD64 images on linux by using QEMU, a system emulator, to allow docker to run images that are not built for the architecture the system is currently running on. This is full system emulation, like the video game console emulators you're probably familiar with. You know how you have to throw gobs of hardware at a machine to play a game written for a gaming system 20 years ago? This is similarly slow. To work around this, the Makefile is written in a manner that allows us to build many docker images at once, to take advantage of the fact that most of us have many processor cores lying around doing not all that much. + +# What does this get us? + +To start with, the resulting docker images allow us to tune the JVM settings on cassandra and elasticsearch, resulting in lower memory consumption, and faster integration tests that don't impact our systems as much. Additionally, it gives us more control over the docker images we're depending on, so that another leftpad incident on docker doesn't impact us.
As things stand, any of the developers of these docker images can upload a new docker image that does Very Bad Things(tm), and we'll gladly download and run it many times a day. Building these images ourselves from known-good git revisions prevents this. Additionally, the multi-architecture approach allows us to be one step closer to running the backend on more esoteric systems, like a Raspberry pi, or an AWS A1 instance, both of which are built on the ARM architecture. Or, if rumour is to be believed, the next release of MacBook Pros. :) + +# Breaking it down: + +## Docker: + +To start with, we're going to have to get a bit into some docker architecture. We have all used docker, and pretty much understand the following workflow: + +I build a docker image from a Dockerfile and maybe some additions, I upload it to dockerhub, and other people can download and use the image. I can use the locally built image directly, without downloading it from dockerhub, and I can share the Dockerfile and additions via git, on github, and allow others to build the image. + +While this workflow works well for working with a single architecture, we're going to have to introduce some new concepts in order to support the multiple architecture way of building docker files. + +### Manifest files. + +Manifest files are generated by docker and contain references to multiple docker images, one for each architecture a given docker image has been built for. Each image in the manifest file is tagged with the architecture that the image is built for. + +Docker contains just enough built-in logic to interpret a manifest file on dockerhub, and download an image that matches the architecture that docker was built for. When using a manifest file, this is how docker determines what image to download. + +### A Manifest centric Workflow: + +If you're building a docker image for multiple architectures, you want a Manifest, so that docker automatically grabs the right image for the user's machine.
This changes our workflow from earlier quite a bit:

I build a docker image from a Dockerfile, and I build other images from slightly different versions of this Dockerfile (more on this later). I tag these images with a suffix, so that I can tell them apart. I upload the images to DockerHub, retaining the tags that differentiate the different versions from each other. I create a manifest file referring to the images that have been pushed to DockerHub, and upload the manifest file to DockerHub. People can download and use the image from DockerHub by referring to the tag of the manifest file. I can share the Dockerfile and additions via git, on GitHub, and others can build their own images from it.

#### What does this look like?

All of us on the team are using AMD64 based machines, so in this example, we're going to build one image for AMD64, and one for its predecessor architecture, i386. We're going to build the SMTP server image we depend on, from https://hub.docker.com/r/namshi/smtp. We're going to use a known safe git revision, and use some minor GNU sed to generate architecture-dependent Dockerfiles from the Dockerfile in git. Everyone should be able to do this on their laptops.

```bash
$ git clone https://github.com/namshi/docker-smtp.git smtp
Cloning into 'smtp'...
remote: Enumerating objects: 4, done.
remote: Counting objects: 100% (4/4), done.
remote: Compressing objects: 100% (4/4), done.
remote: Total 126 (delta 0), reused 0 (delta 0), pack-reused 122
Receiving objects: 100% (126/126), 26.57 KiB | 269.00 KiB/s, done.
Resolving deltas: 100% (61/61), done.
$ cd smtp
$ git reset --hard 8ad8b849855be2cb6a11d97d332d27ba3e47483f
HEAD is now at 8ad8b84 Merge pull request #48 from zzzsochi/master
$ cat Dockerfile | sed "s/^\(MAINTAINER\).*/\1 Julia Longtin \"julia.longtin@wire.com\"/" | sed "s=^\(FROM \)\(.*\)$=\1i386/\2=" > Dockerfile-386
$ cat Dockerfile | sed "s/^\(MAINTAINER\).*/\1 Julia Longtin \"julia.longtin@wire.com\"/" | sed "s=^\(FROM \)\(.*\)$=\1\2=" > Dockerfile-amd64
$ docker build -t julialongtin/smtp:0.0.9-amd64 -f Dockerfile-amd64 .

$ docker build -t julialongtin/smtp:0.0.9-386 -f Dockerfile-386 .

$ docker push julialongtin/smtp:0.0.9-amd64

$ docker push julialongtin/smtp:0.0.9-386
] 271.46K --.-KB/s in 0.07s

2019-03-06 14:27:39 (3.65 MB/s) - ‘sash_3.8-5_armel.deb’ saved [277976/277976]
$
```

This deb will not install on our machine, so we're going to manually take it apart, to get the sash binary out of it.

```bash
$ mkdir tmp
$ cd tmp
$ ar x ../sash_3.8-5_armel.deb
$ ls
 control.tar.xz data.tar.xz
$ tar -xf data.tar.xz
$ ls -la bin/sash
-rwxr-xr-x 1 demo demo 685348 Jun 9 2018 bin/sash
```

To verify which architecture this binary is built for, use the 'file' command.
```bash
$ file bin/sash
bin/sash: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), statically linked, for GNU/Linux 3.2.0, BuildID[sha1]=20641a8ca21b2c320ea7e6079ec88b857c7cbcfb, stripped
$
```

Now we can run this ARM binary, and even run the AMD64 programs that are on our own machine from inside it.
```bash
$ bin/sash
Stand-alone shell (version 3.8)
> file bin/sash
bin/sash: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), statically linked, for GNU/Linux 3.2.0, BuildID[sha1]=20641a8ca21b2c320ea7e6079ec88b857c7cbcfb, stripped
> ls
bin usr
> uname -a
Linux boxtop 4.9.0-8-amd64 #1 SMP Debian 4.9.144-3 (2019-02-02) x86_64 GNU/Linux
> whoami
demo
>
```

## QEMU, BinFmt, and Docker (Oh my!)
After following the directions in the last two sections, you've created two docker images (one for i386, one for AMD64), created a manifest referring to them, set linux up to load qemu and use it, and launched a binary for another architecture.

Creating non-native docker images can now be done very similarly to how the i386 one was done earlier.

Because you are using a system emulator, your docker builds for non-x86 architectures will be slower. Additionally, the emulators are not perfect, so some images won't build. Finally, code is simply less tested on machines that are not AMD64, so there are generally more bugs.

### Arm Complications:
The 32 bit version of ARM is actually divided into variants, and not all linux distributions are available for all of them. arm32v5 and arm32v7 are supported by debian, while arm32v6 is supported by alpine. This variant must be specified during manifest construction, so to continue with our current example, these are the commands for annotating the docker images for our arm32v5 and arm32v7 builds of smtp:
```bash
$ docker manifest annotate julialongtin/smtp:0.0.9 julialongtin/smtp:0.0.9-arm32v5 --arch arm --variant 5
$ docker manifest annotate julialongtin/smtp:0.0.9 julialongtin/smtp:0.0.9-arm32v7 --arch arm --variant 7
```

# Into the GNU Make Abyss

Now that we've done all of the above, we should be capable of working with docker images independent of the architecture we're targeting. Now, into the rabbit hole we go, automating everything with GNU Make.

## Why Make?
GNU make is designed to build targets by looking at the environment it's in, and executing a number of rules depending on what it sees, and what it has been asked to do. The Makefile we're going to look through does all of the above, along with making some minor changes to the docker images. It does this in parallel, running as many of the commands at once as possible, in order to take advantage of idle cores.
## Using the Makefile

Before we take the Makefile apart, let's go over using it.

This Makefile is meant to be used in four ways: building all of the images, pushing (and building) all of the images, building a single image, and pushing a single image. It follows the manifest workflow we documented earlier.

By default, running 'make' in the same directory as the Makefile (assuming you've set all of the above up correctly) will attempt to build all of the docker images the Makefile knows about and push them to DockerHub. If you want this to work, you need to create a DockerHub account, use 'docker login' to log your local instance of docker in to DockerHub, and then create a repository for each docker image.

To get a list of the names of the docker images this Makefile knows about, run 'make names'.
```bash
$ make names
Debian based images:
airdock_fakesqs airdock_rvm airdock_base smtp dynamodb_local cassandra
Alpine based images:
elasticsearch java_maven_node_python localstack minio
$
```

The list of names is divided into two groups: one group is for images based on debian, and the other is for images based on alpine. This Makefile can only build for one of these two distributions at a time.

Since no-one wants to click through DockerHub to create repositories, let's just build docker images locally, for now.

Make looks at its environment in order to decide what to do, so here are some environment variables that we're going to use. All of these variables have default values, so we're only going to provide a few of them.

- `ARCHES`: the list of architectures we're going to attempt docker builds for. Mac users should supply "386 AMD64" for this, as they have no binfmt support.
- `DIST`: the distribution we're going to build for. This can be either DEBIAN or ALPINE.
- `DOCKER_USERNAME`: our username on DockerHub.
- `DOCKER_EMAIL`: our email address, as far as DockerHub is concerned.
- `DOCKER_REALNAME`: again, our name string that will be displayed on DockerHub.
- `SED`: which sed binary to use. Mac users should install GNU sed (gsed), and pass the path to it in this variable.

To build all of the debian based images locally on my machine, I run:
```bash
make DIST=DEBIAN DOCKER_USERNAME=julialongtin DOCKER_EMAIL=julia.longtin@wire.com DOCKER_REALNAME='Julia Longtin' build-all -j
```

What's the -j for? Adding a '-j' to the command line causes make to execute in parallel. That's to say, it will try to build ALL of the images at once, taking care to build images that are dependencies of other images before building the images that depend on them.

Note that since we are building the images without pushing them to DockerHub, no manifest files are generated.

If we want to use these images in our docker-compose setup, we can edit the docker-compose file, and refer to the image we want with its architecture suffix attached. This will make docker-compose use the local copy, instead of hitting DockerHub, grabbing the manifest, and using an image from there. For instance, to use the local cassandra image I just built, I would edit the docker-compose.yaml file in our wire-server repo, and make the cassandra section look like the following:

```
  cassandra:
    container_name: demo_wire_cassandra
    #image: cassandra:3.11.2
    image: julialongtin/cassandra:0.0.9-amd64
    ports:
      - "127.0.0.1:9042:9042"
    environment:
# what's present in the jvm.options file by default.
#      - "CS_JAVA_OPTIONS=-Xmx1024M -Xms1024M -Xmn200M"
      - "CS_JVM_OPTIONS=-Xmx128M -Xms128M -Xmn50M"
    networks:
      - demo_wire
```

To remove all of the git repositories containing the Dockerfiles we download to build these images, we can run `make clean`. There is also the option to run `make cleandocker` to REMOVE ALL OF THE DOCKER IMAGES ON YOUR MACHINE. Careful with that one.
Note that docker makes good use of caching, so running 'make clean' followed by the same make command you used to build the images will complete really fast, as docker does not actually need to rebuild the images.

## Reading through the Makefile

OK, now that we have a handle on what it does, and how to use it, let's get into the Makefile itself.

A Makefile is a series of rules for performing tasks, variables used when creating those tasks, and some minimal functions and conditional structures. Rules are implemented as groups of bash commands, where each line is handled by a new bash interpreter. Personally, I think it 'feels functiony', only without a type system and with lots of side effects. Like if bash tried to be functional.

### Variables

#### Overrideable Variables
The make language has multiple types of variables and variable assignments. To begin with, let's look at the variables we used in the last step.
```bash
$ cat Makefile | grep "?="
DOCKER_USERNAME ?= wireserver
DOCKER_REALNAME ?= Wire
DOCKER_EMAIL ?= backend@wire.com
TAGNAME ?= :0.0.9
DIST ?= DEBIAN
LOCALARCH ?= $(call dockerarch,$(LOCALDEBARCH))
 ARCHES ?= $(DEBARCHES)
 NAMES ?= $(DEBNAMES)
 ARCHES ?= $(ALPINEARCHES)
 NAMES ?= $(ALPINENAMES)
SED ?= sed
SMTP_COMMIT ?= 8ad8b849855be2cb6a11d97d332d27ba3e47483f
DYNAMODB_COMMIT ?= c1eabc28e6d08c91672ff3f1973791bca2e08918
ELASTICSEARCH_COMMIT ?= 06779bd8db7ab81d6706c8ede9981d815e143ea3
AIRDOCKBASE_COMMIT ?= 692625c9da3639129361dc6ec4eacf73f444e98d
AIRDOCKRVM_COMMIT ?= cdc506d68b92fa4ffcc7c32a1fc7560c838b1da9
AIRDOCKFAKESQS_COMMIT ?= 9547ca5e5b6d7c1b79af53e541f8940df09a495d
JAVAMAVENNODEPYTHON_COMMIT ?= 645af21162fffd736c93ab0047ae736dc6881959
LOCALSTACK_COMMIT ?= 645af21162fffd736c93ab0047ae736dc6881959
MINIO_COMMIT ?= 118270d76fc90f1e54cd9510cee9688bd717250b
CASSANDRA_COMMIT ?= 064fb4e2682bf9c1909e4cb27225fa74862c9086
```

The '?=' assignment operator is used to provide a default value.
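The behaviour is a lot like a shell parameter's default-value expansion. As a rough shell analogy (not make itself, and the variable name is just one of the defaults above):

```shell
# Rough shell analogy for make's '?=' operator: keep the caller's
# DOCKER_USERNAME if one was provided, otherwise fall back to a default.
DOCKER_USERNAME="${DOCKER_USERNAME:-wireserver}"
echo "$DOCKER_USERNAME"
```

Run bare, this prints the fallback; run with `DOCKER_USERNAME=julialongtin` in the environment, it prints the override, just as passing `DOCKER_USERNAME=julialongtin` to make overrides a `?=` default.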
Earlier, when we ran make as "make DIST=DEBIAN DOCKER_USERNAME=julialongtin DOCKER_EMAIL=julia.longtin@wire.com DOCKER_REALNAME='Julia Longtin' build-all -j", we were overriding those defaults. The make interpreter will use values provided on the command line, or values we have used 'export' to place into our shell environment.

LOCALARCH and the assignments for ARCHES and NAMES are a bit different. LOCALARCH is a function call, and the ARCHES and NAMES assignments are embedded in conditional statements. We'll cover those later.

Note the block of COMMIT IDs. These are here in case we want to experiment with newer releases of each of the docker images we're using. Pinning what we're using to a commit ID makes it much harder for an upstream source to send us malicious code.

#### Non-Overrideable Variables
The following group of variables use a different assignment operator, one that tells make to assign the value unconditionally, instead of looking in the environment first.
```bash
$ cat Makefile | grep ":="
USERNAME := $(DOCKER_USERNAME)
REALNAME := $(DOCKER_REALNAME)
EMAIL := $(DOCKER_EMAIL)
STRETCHARCHES := arm32v5 arm32v7 386 amd64 arm64v8 ppc64le s390x
JESSIEARCHES := arm32v5 arm32v7 386 amd64
DEBARCHES := arm32v5 arm32v7 386 amd64
JESSIENAMES := airdock_fakesqs airdock_rvm airdock_base smtp
STRETCHNAMES := dynamodb_local cassandra
DEBNAMES := $(JESSIENAMES) $(STRETCHNAMES)
ALPINEARCHES := amd64 386 arm32v6
ALPINENAMES := elasticsearch java_maven_node_python localstack minio
PREBUILDS := airdock_rvm-airdock_base airdock_fakesqs-airdock_rvm localstack-java_maven_node_python
NOMANIFEST := airdock_rvm airdock_fakesqs localstack
LOCALDEBARCH := $(shell [ ! -z `which dpkg` ] && dpkg --print-architecture)
BADARCHSIM := localstack-arm32v6 java_maven_node_python-arm32v6 dynamodb_local-386
$
```

The first three variable assignments refer to other variables. These basically exist as aliases, to make our make rules denser later.
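The difference between the two operators can be seen with a throwaway Makefile. This is a made-up example (the file and variable names are invented), assuming GNU make is installed:

```shell
# Write a tiny made-up Makefile: OVERRIDEME uses '?=', PINNED uses ':='.
# The 'show' recipe is on one line (after the ';') to avoid needing a tab.
printf '%s\n' \
  'OVERRIDEME ?= default' \
  'PINNED := fixed' \
  'show: ; @echo $(OVERRIDEME)-$(PINNED)' > vars.mk
# '?=' keeps a value found in the environment; ':=' ignores it.
OVERRIDEME=env PINNED=env make -f vars.mk show   # env-fixed
make -f vars.mk show                             # default-fixed
```

Note that a value passed on make's command line (`make PINNED=env ...`) would still win over both; only environment variables are ignored by `:=`.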
STRETCHARCHES and JESSIEARCHES contain the lists of architectures that DockerHub's debian stretch and jessie images provide. DEBARCHES defines which architectures we're going to build for our debian targets. STRETCHARCHES and JESSIEARCHES only exist to make it visible to readers of the Makefile which images CAN be built for which architectures.

JESSIENAMES and STRETCHNAMES are used similarly, only they are actually referred to by DEBNAMES, to provide the list of debian based images that can be built.

ALPINEARCHES and ALPINENAMES work similarly, and are used when we've provided "DIST=ALPINE". We do not divide these into separate variables quite the same way as for debian, because all of our alpine images are based on alpine 3.7.

PREBUILDS contains our dependency map. Essentially, this is a set of pairs of image names, where the first image mentioned depends on the second image. So, airdock_rvm depends on airdock_base, airdock_fakesqs depends on airdock_rvm, etc. This means that our docker image names may not contain `-`s. DockerHub allows it, but this Makefile needed a separator... and that's the one I picked.

BADARCHSIM is similar, pairing the name of an image with the architecture it fails to build on. This is so I can blacklist things that don't work yet.

LOCALDEBARCH is a variable set by executing a small snippet of bash. The snippet makes sure dpkg (the debian package manager) is installed, and uses dpkg to determine the architecture of your local machine. As you remember from when we were building docker images by hand, docker will automatically fetch an image that is compiled for your current architecture, so we use LOCALDEBARCH later to decide which architectures we need to fetch with a prefix or postfix, and which we can fetch normally.

NOMANIFEST lists images that need a work-around for fetching image dependencies for specific architectures. You know how we added the name of the architecture BEFORE the image name in the Dockerfiles?
Well, in the case of the dependencies of the images listed here, DockerHub doesn't support that. DockerHub supports that form only for 'official' docker images, like alpine, debian, etc. As a result, in order to fetch an architecture specific version of the dependencies of these images, we need to add a `-<arch>` suffix, like -386, -arm32v7, etc.

### Conditionals
We don't make much use of conditionals, but there are three uses in this Makefile in total. Let's take a look at them.

In order to look at our conditionals (and many other sections of this Makefile later), we're going to abuse sed. If you're not comfortable with the sed shown here, or are having problems getting it to work, you can instead just open the Makefile in your favorite text editor, and search around. I abuse sed here both for brevity, and to encourage the reader to understand complicated sed commands, for when we use them later IN the Makefile.

SED ABUSE:
To get our list of conditionals out of the Makefile, we're going to use some multiline sed. Specifically, we're going to look for a line starting with 'ifeq', followed by lines starting with two spaces, then the line after those.

```bash
$ cat Makefile | sed -n '/ifeq/{:n;N;s/\n  /\n /;tn;p}'
ifeq ($(LOCALARCH),)
 $(error LOCALARCH is empty, you may need to supply it.)
endif
ifeq ($(DIST),DEBIAN)
 ARCHES ?= $(DEBARCHES)
 NAMES ?= $(DEBNAMES)
endif
ifeq ($(DIST),ALPINE)
 ARCHES ?= $(ALPINEARCHES)
 NAMES ?= $(ALPINENAMES)
endif
$
```

There's a lot to unpack there, so let's start with the simple part, the conditionals.
The conditionals are checking for equality, in all cases.
First, we check to see if LOCALARCH is empty. This can happen if dpkg was unavailable, and the user did not supply a value on the make command line or in their bash environment. If that happens, we use make's built-in error function to display an error, and break out of the Makefile.
The second and third conditionals decide on the values of ARCHES and NAMES. Earlier, we determined that the default selection for DIST was DEBIAN, so this pair just allows the user to select ALPINE instead. Note that the variable assignments in the conditionals use the overrideable form, so the end user can override these on make's command line or in their environment. Mac users will want to do this, since they don't have QEMU available in the same form, and are limited to building for the 386 and AMD64 architectures.

Note that conditionals are evaluated once, when the file is read. This means that we don't have the ability to use them in our rules, or in our functions, and have to abuse other operations in 'functionalish' manners...

Now, back to our sed abuse.
Sed is a stream editor, and quite a powerful one. In this case, we're using it for a multi-line search. We're supplying the -n option, which squashes all output, except where sed is specifically told to print something with a command.
Let's look at each of the commands in that statement separately.
```sed
# find a line that has 'ifeq' in it.
/ifeq/
# begin a block of commands. every command in the block should be separated by a semicolon.
{
# create an anchor, that is to say, a point that can be branched to.
:n;
# Append the next line into the pattern space. so now, for the first block, the pattern space would contain "ifeq ($(LOCALARCH),)\n  $(error LOCALARCH is empty, you may need to supply it.)".
N;
# Replace the two spaces after a newline in the pattern space with one space.
s/\n  /\n /;
# If the previous 's' command found and changed something, go to our label.
tn;
# print the contents of the pattern space.
p
# close the block of commands.
}
```
... Simple, right?

Note that the contents above can be stored in a file, and run with sed's "-f" option, for more complicated sed scripts. Sed is turing complete, so... things like tetris have been written in it.
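If you want to experiment with this multi-line technique without the real Makefile, here's a tiny self-contained run (the file name and its contents are invented for illustration):

```shell
# A made-up mini Makefile with one conditional block in it.
cat > /tmp/mini.mk <<'EOF'
DIST ?= DEBIAN
ifeq ($(DIST),DEBIAN)
  ARCHES ?= $(DEBARCHES)
endif
EOF
# Gather the ifeq line, its two-space-indented body, and the terminating line.
sed -n '/ifeq/{:n;N;s/\n  /\n /;tn;p}' /tmp/mini.mk
# prints:
# ifeq ($(DIST),DEBIAN)
#  ARCHES ?= $(DEBARCHES)
# endif
```

The `DIST ?= DEBIAN` line never prints, because with `-n` only the `p` inside the matched block produces output.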
My longest sed scripts do things like sanity checking OS install procedures, or changing binaryish protocols into xmlish forms.

### Functions
Make has a concept of functions, and the first two functions we use are a bit haskell inspired.

SED ABUSE:
To get a list of the functions in our Makefile, we're going to use a bit more traditional sed. Specifically, we're going to look for lines that start with a number of lowercase characters immediately followed by an '=' sign.

```bash
$ cat Makefile | sed -n '/^[a-z]*=/p'
dockerarch=$(patsubst i%,%,$(patsubst armel,arm32v5,$(patsubst armhf,arm32v7,$(patsubst arm64,arm64v8,$(1)))))
fst=$(word 1, $(subst -, ,$(1)))
snd=$(word 2, $(subst -, ,$(1)))
goodarches=$(filter-out $(call snd,$(foreach arch,$(ARCHES),$(filter $(1)-$(arch),$(BADARCHSIM)))),$(ARCHES))
nodeps=$(filter-out $(foreach target,$(NAMES),$(call snd,$(foreach dependency,$(NAMES),$(filter $(target)-$(dependency),$(PREBUILDS))))),$(NAMES))
maniarch=$(patsubst %32,%,$(call fst,$(subst v, ,$(1))))
manivariant=$(foreach variant,$(word 2, $(subst v, ,$(1))), --variant $(variant))
archpostfix=$(foreach arch,$(filter-out $(filter-out $(word 3, $(subst -, ,$(filter $(call snd,$(1))-%-$(call fst,$(1)),$(foreach prebuild,$(PREBUILDS),$(prebuild)-$(call fst,$(1)))))),$(LOCALARCH)),$(call fst,$(1))),-$(arch))
archpath=$(foreach arch,$(patsubst 386,i386,$(filter-out $(LOCALARCH),$(1))),$(arch)/)
$
```

These are going to be a bit hard to explain in order, especially since we haven't covered where they are called from. Let's take them from simplest to hardest, which happens to coincide with shortest to longest.

The fst and snd functions are what happens when a haskell programmer writes make. You remember all of those pairs of values earlier, that were separated by a single '-' character? These functions return either the first or the second item in such a pair. Let's unpack 'fst'.
fst uses the 'word' function of make to retrieve the first word from "$(subst -, ,$(1))". The 'subst' function replaces each dash with a space; this separates a dash-separated pair into a space separated string. $(1) is the first argument passed to this function.
snd works similarly, retrieving the second word from our pair.

The next easiest function to explain is 'maniarch'. It returns the architecture string that we use when annotating a docker image. When we refer to an architecture, we use a string like 'amd64' or 'arm32v6', but docker manifest wants just 'arm', 'amd64', or '386'.
maniarch first uses the 'patsubst' function to replace "anystring32" with "anystring". This removes the 32 from arm32. It's given the result of $(call fst,$(subst v, ,$(1))) as a string to work with.
$(call fst,$(subst v, ,$(1))) calls our 'fst' function, giving it the result of replacing 'v' with ' ' in the passed in argument. In the case of arm32v6, this separates the string into "arm32 6". Note that instead of calling fst, we could have just used 'word 1' like we did in fst. This is a mistake on my part, but it works regardless, because of the way fst is built. As before, $(1) is the argument passed into our function.

manivariant serves a similar purpose to maniarch. Its job is to take an architecture name (amd64, arm32v5, etc...), and if it has a 'v' in it, to return the '--variant' command line option for our 'docker manifest annotate'.
manivariant starts by using make's 'foreach' function. This works by breaking its second argument into words, storing each into the variable named in the first argument, and then generating text using the third argument. This is a bit abusive, as we're really just using it as an "if there is a variant, add --variant" structure.
The first argument of foreach is the name of a variable; we used 'variant' here. The second argument in this case properly uses word and subst to return only the content after a 'v' in our passed in argument, or the empty string.
The third argument is ' --variant $(variant)', using the variable defined in the first argument of foreach to create " --variant 5" if this is passed "arm32v5", for instance.

archpath is similar in structure to manivariant. In order to find a version of a docker image that is appropriate for our non-native architectures, we have to add the 'archname/' string to the path of the image we're deriving from, in our Dockerfile. This function returns that string. We start by using foreach in a similar manner to manivariant, to only return a string if the second argument to foreach evaluates to content. In our second argument, we begin by performing a patsubst, replacing a '386' with an 'i386' if it's found in the patsubst argument. This is because on DockerHub, official images for different architectures are actually stored in a series of machine maintained accounts, and an account name can't start with a number; therefore, 386 images are stored under a user called 'i386'. As an argument to the patsubst, we're providing our first usage of filter-out. It's used here so that if the local architecture was supplied to this function, the string will come back empty in section 2 of our foreach, and therefore the third section won't even be evaluated.

Our next function to explain is 'goodarches'. This function is passed an image name, and returns all of the arches from our architecture list that that image can be built for. It basically searches BADARCHSIM from earlier, and removes an architecture from the returned copy of ARCHES if a `name-arch` pair for that architecture exists. We use filter-out to remove anything returned by its first argument from the ARCHES list we provide as its second argument. The first argument to filter-out uses snd to separate the architecture name from a string it found, and uses foreach and filter to search BADARCHSIM for all possible combinations of the passed in image name and each of the architectures.
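To make the string plumbing of fst, snd, and maniarch concrete, here is a purely hypothetical sketch of the same logic in plain shell parameter expansion (make's own functions do the real work in the Makefile):

```shell
# Hypothetical shell equivalents of the make functions fst, snd, and maniarch.
fst() { echo "${1%%-*}"; }    # text before the first '-'
snd() { echo "${1#*-}"; }     # text after the first '-' (our pairs contain exactly one dash)
maniarch() {
  arch="${1%%v*}"             # drop the variant: arm32v7 -> arm32
  echo "${arch%32}"           # drop a trailing 32: arm32 -> arm
}
fst smtp-amd64    # smtp
snd smtp-amd64    # amd64
maniarch arm32v7  # arm
maniarch amd64    # amd64
```

The last two calls show why maniarch exists: 'arm32v7' has to become plain 'arm' for docker manifest, while names without a variant pass through unchanged.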
dockerarch is a bit simpler than the last few. It takes a debian architecture name, and replaces it with the docker architecture name, using a series of nested patsubst substitutions.

Unlike our earlier functions, nodeps does not require an argument. Its job is to return the list of images from NAMES that are not depended on by any other image. To do this, we start with a filter-out of NAMES, then use a pair of nested foreach statements, both searching through NAMES, and constructing all `target-dependency` combinations. Each value is looked for in PREBUILDS, and if a combination is found, we use snd to return the dependency for filter-out to remove. This is probably overly complicated, and can likely be shortened by the use of patsubst. "it works, ship it."

Finally, we get to archpostfix. archpostfix serves a similar purpose to archpath, only it provides a `-<arch>` suffix for the end of the image name, if the DEPENDENCY of this image is not an official image, and therefore cannot be found by adding an 'arch/' prefix. This is long, and probably also a candidate for shortening. Reading your way through this one is an exercise for when the reader wants a reverse polish notation headache.

To summarize the Make functions we've (ab)used in this section:
```
$(word N,string of words)    # return the Nth word in a space separated string.
$(subst -, ,string-of-words) # replace occurrences of '-' with ' ' in the string.
$(patsubst string%,%string)  # replace a pattern with another pattern, using % as a single wildcard.
$(call function,argument)    # call a function.
$(foreach var,string,$(var)) # iterate over a space separated string, evaluating the last argument with var set to each word in the string.
$(filter word,word list)     # return word if it is found in word list.
$(filter-out word,word list) # return word list without word.
```

Now, after all of that, let's go through the sed command we last used. Remember that?
```bash
$ cat Makefile | sed -n '/^[a-z]*=/p'
```
Again, we're using sed in '-n' mode, suppressing all output except the output we are searching for. /PATTERN/ searches the lines of the input for a pattern, and if it's found, the command afterward is executed, which is a 'p' for print, in this case. The pattern given is '^[a-z]*='. The '^' at the beginning means 'look for this pattern at the beginning of the line', and the '=' at the end is the equal sign we were looking for. '[a-z]*' is us using a character class. Character classes are sedspeak for sets of characters. They can be individually listed, or, as in this case, be a character range. The '*' after the character class just means "look for these characters any number of times". Technically, that means a line starting with '=' would also match (since zero is any number of times), but luckily our file doesn't contain lines starting with '=', as this is not valid make syntax.

### Rules.

Traditionally, makefiles are pretty simple. They are used to build a piece of software on your local machine, so you don't have to memorize all of the steps, and can just type 'make', and have it done. A simple Makefile looks like the following:
```make
CC=gcc
CFLAGS=-I.
DEPS = hellomake.h

%.o: %.c $(DEPS)
	$(CC) -c -o $@ $< $(CFLAGS)

hellomake: hellomake.o hellofunc.o
	$(CC) -o hellomake hellomake.o hellofunc.o

clean:
	rm hellomake hellomake.o hellofunc.o
```
This example Makefile has some variables and rules, used to build a C program into an executable with GCC.

Our Makefile is much more advanced, necessitating this document, to ensure maintainability.
A single make rule is divided into three sections: what you want to build, what you need to build first, and the commands you run to build the thing in question:
```make
my_thing: things I need first
	bash commands to build it

target: prerequisites
	recipe line 1
```

The commands to build a thing (recipe lines) are prefaced with a tab character, not spaces. Each line is executed in a separate shell instance.

#### The roots of the trees

In the section where we showed you how to use our Makefile, we were calling 'make' with an option, such as push-all, build-smtp, names, or clean. We're now going to show you the rules that implement these options.

SED ABUSE:
This time, we're going to add the -E option to sed. This kicks sed into 'extended regex' mode, meaning for our purposes that we don't have to put a \ before a ( or a ) in our regex. We're then going to use a pattern group, to specify that we want either the clean or the names rules. We're also going to swap the tabs for spaces, to prevent our substitution command from always matching, without even visibly changing the output. Total cheating.
```bash
$ cat Makefile | sed -n -E '/^(clean|names)/{:n;N;s/\n\t/\n /;tn;p}'
clean:
 rm -rf elasticsearch-all airdock_base-all airdock_rvm-all airdock_fakesqs-all cassandra-all $(DEBNAMES) $(ALPINENAMES)

cleandocker:
 docker rm $$(docker ps -a -q) || true
 docker rmi $$(docker images -q) --force || true

names:
 @echo Debian based images:
 @echo $(DEBNAMES)
 @echo Alpine based images:
 @echo $(ALPINENAMES)
```

Most Makefiles change their environment. Having changed their environment, most users want a quick way to set the environment back to default, so they can make changes, and build again. To enable this, as a convention, most Makefiles have a 'clean' rule. Ours removes the git repos that we build the docker images from.
Note the hardcoded list of '-all' directories: these are the git repos for images where the repo does not simply have a Dockerfile at its root. In those cases, our rules that check out the repos check them out to a '-all' directory, then do Things(tm) to create a Dockerfile.

cleandocker is a rule I use on my machine, when docker images have gotten out of control. It removes all of the docker images on my machine, and is not meant to be run regularly.

names displays the names of the images this Makefile knows about. It uses a single @ symbol at the beginning of each recipe line. This tells make that it should NOT display the command being run, when make runs it.

OK, that covers the simple make rules, the ones with no dependencies or parameters. Now let's take a look at our build and push rules. These are the 'top' of a dependency tree, which is to say they depend on things, that depend on things... that do the thing we've asked for.

```bash
$ cat Makefile | sed -n -E '/^(build|push|all)/{:n;N;s/\n\t/\n /;tn;p}'
all: $(foreach image,$(nodeps),manifest-push-$(image))

build-%: $$(foreach arch,$$(call goodarches,%),create-$$(arch)-$$*)
 @echo -n

build-all: $(foreach image,$(nodeps),build-$(image))

push-%: manifest-push-%
 @echo -n

push-all: $(foreach image,$(nodeps),manifest-push-$(image))
$
```

Let's take these from simplest to most complex.

push-% is the rule called when we run 'make push-' followed by an image name. It depends on manifest-push-%, meaning that make will take whatever you've placed after the 'push-', look for a matching manifest-push- rule, and make sure that rule completes before trying to execute this rule. Executing this rule does effectively nothing; in reality, the '@echo -n' exists just to let the push-% rule be executed. By default, make considers wildcard rules as phony, meaning they cannot be called from the command line, and must be called from a rule with no wildcarding.
+ +push-all is allowed to have no commands, because its name contains no wildcard operator. In its dependency list, we're using a foreach loop to go through our list of images that have no dependencies, and ask for the manifest-push- rule of each to be built. + +all is identical to push-all. I could have just depended on push-all, and saved some characters here. + +build-all operates similarly to push-all, only it asks for the build- rule of each of the no-dependency images to be built. + +build-% combines the approach of push-% and build-all. It uses foreach to request the build of a create- rule for each architecture that we know this image will build on, producing one docker image per architecture. This is our first exposure to $$ structures, so let's look at those a bit. + +By default, make allows for one % in the build-target, and one % in the dependencies. It takes what it matches the % against in the build-target, and substitutes the first % found in the dependency list with that content. So, what do you do if you need the thing that was matched to appear twice in the dependency list? Enter .SECONDEXPANSION. + +```bash +$ cat Makefile | sed -n -E '/^(.SECOND)/{:n;N;s/\n\t/\n /;tn;p}' | less +.SECONDEXPANSION: + +``` + +.SECONDEXPANSION looks like a rule, but really, it's a flag to make, indicating that dependency lists in this Makefile should be expanded twice. During the first expansion, things will proceed as normal, and everything with two dollar signs will be ignored. During the second expansion, things that were delayed by using two dollar signs are run, AND a set of variables that is normally only available in the 'recipe' section becomes available. In the case we're looking at, this means that during the first expansion, only the "%" character will be interpreted. During the second expansion the foreach and call will actually be executed, and the $$* will be expanded the same way as $* will be in the recipe section, namely, to exactly what the % matched in the first expansion.
This effectively gives us two instances of %: the one expanded in the first expansion, and $$* expanded in the second expansion. + +build-% also uses the same 'fake recipe' trick as push-%, that is, having a recipe that does nothing, to trick make into letting you run this. + +#### One Level Deeper + +The rules you've seen so far were intended for user interaction. They are all rules that the end user of this Makefile picks between, when deciding what they want this Makefile to do. Let's look at the rules that these depend on. + +```bash +$ cat Makefile | sed -n -E '/^(manifest-push)/{:n;N;s/\n\t/\n /;tn;p}' +manifest-push-%: $$(foreach arch,$$(call goodarches,$$*), manifest-annotate-$$(arch)-$$*) + docker manifest push $(USERNAME)/$*$(TAGNAME) + +$ +``` + +manifest-push-% should be relatively simple for you now. The only thing new here is that you get to see $* used in the construction of our docker manifest push command line. Let's follow the manifest creation down a few more steps. + +```bash +$ cat Makefile | sed -n -E '/^(manifest-ann|manifest-crea)/{:n;N;s/\n\t/\n /;tn;p}' +manifest-annotate-%: manifest-create-$$(call snd,$$*) + docker manifest annotate $(USERNAME)/$(call snd,$*)$(TAGNAME) $(USERNAME)/$(call snd,$*)$(TAGNAME)-$(call fst,$*) --arch $(call maniarch,$(call fst,$*)) $(call manivariant,$(call fst,$*)) + +manifest-create-%: $$(foreach arch,$$(call goodarches,%), upload-$$(arch)-$$*) + docker manifest create $(USERNAME)/$*$(TAGNAME) $(patsubst %,$(USERNAME)/$*$(TAGNAME)-%,$(call goodarches,$*)) --amend + +``` + +manifest-push depends on manifest-annotate, which depends on manifest-create, which depends on upload-... So when make tries to push a manifest, it makes sure an image has been uploaded, then creates a manifest, then annotates the manifest. We're basically writing rules for each step of our manifest, only backwards. Continuing this pattern, the last thing we will depend on will be the rules that actually download the dockerfiles from git.
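Before descending further, here's a self-contained sketch of the $$ trick. The fst/snd helpers below are hypothetical stand-ins for the real ones defined elsewhere in this Makefile (I'm assuming an 'arch-image' naming scheme split on the first dash), but the .SECONDEXPANSION mechanics are exactly as described above:

```bash
# A toy Makefile: $$* survives the first expansion as $*, and during the
# second expansion it becomes the pattern stem, just like in a recipe.
printf 'fst = $(word 1,$(subst -, ,$(1)))\nsnd = $(word 2,$(subst -, ,$(1)))\n.SECONDEXPANSION:\nupload-%%: create-$$*\n\t@echo push $(call snd,$*) for $(call fst,$*)\ncreate-%%:\n\t@echo build $(call snd,$*) for $(call fst,$*)\n' > /tmp/toy-expand.mk
make -f /tmp/toy-expand.mk upload-arm64-minio
```

make resolves the prerequisite to create-arm64-minio, so 'build minio for arm64' prints before 'push minio for arm64'.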
+ +#### Dependency Resolving + +We've covered the entry points of this Makefile, and the chained dependencies that create, annotate, and upload a manifest file. Now we get into two seriously complicated sets of rules, the upload rules and the create rules. These accomplish their tasks of uploading and building docker containers, but at the same time, they accomplish our dependency resolution. Let's take a look. + +```bash +$ cat Makefile | sed -n -E '/^(upload|create|my-|dep)/{:n;N;s/\n\t/\n /;tn;p}' + +upload-%: create-% $$(foreach predep,$$(filter $$(call snd,%)-%,$$(PREBUILDS)), dep-upload-$$(call fst,$$*)-$$(call snd,$$(predep))) + docker push $(USERNAME)/$(call snd,$*)$(TAGNAME)-$(call fst,$*) | cat + +dep-upload-%: create-% $$(foreach predep,$$(filter $$(call snd,%)-%,$$(PREBUILDS)), dep-subupload-$$(call fst,$$*)-$$(call snd,$$(predep))) + docker push $(USERNAME)/$(call snd,$*)$(TAGNAME)-$(call fst,$*) | cat + +dep-subupload-%: create-% + docker push $(USERNAME)/$(call snd,$*)$(TAGNAME)-$(call fst,$*) | cat + +create-%: Dockerfile-$$(foreach target,$$(filter $$(call snd,$$*),$(NOMANIFEST)),NOMANIFEST-)$$* $$(foreach predep,$$(filter $$(call snd,%)-%,$(PREBUILDS)), depend-create-$$(call fst,$$*)-$$(call snd,$$(predep))) + cd $(call snd,$*) && docker build -t $(USERNAME)/$(call snd,$*)$(TAGNAME)-$(call fst,$*) -f Dockerfile-$(call fst,$*) . | cat + +depend-create-%: Dockerfile-$$(foreach target,$$(filter $$(call snd,$$*),$(NOMANIFEST)),NOMANIFEST-)$$* $$(foreach predep,$$(filter $$(call snd,%)-%,$(PREBUILDS)), depend-subcreate-$$(call fst,$$*)-$$(call snd,$$(predep))) + cd $(call snd,$*) && docker build -t $(USERNAME)/$(call snd,$*)$(TAGNAME)-$(call fst,$*) -f Dockerfile-$(call fst,$*) . | cat + +depend-subcreate-%: Dockerfile-$$(foreach target,$$(filter $$(call snd,$$*),$(NOMANIFEST)),NOMANIFEST-)$$* + cd $(call snd,$*) && docker build -t $(USERNAME)/$(call snd,$*)$(TAGNAME)-$(call fst,$*) -f Dockerfile-$(call fst,$*) .
| cat + +$ +``` + +First, let's tackle the roles of these rules. The *upload* rules are responsible for running docker push, while the *create* rules are responsible for running docker build. All of the upload rules depend on the create-% rule, to ensure what they want to push has been built. + +These rules are set up in groups of three: + +upload-% and create-% form the root of these groups. upload-% depends on create-%, and create-% depends on the creation of a Dockerfile for this image, which is the bottom of our dependency tree. + +upload-%/create-% depend on two rules: dep-upload-%/depend-create-%, which handle the upload/create for the image that THIS image depends on. There are also dep-subupload-% and dep-subcreate-% rules, to handle the dependency of the dependency of this image. + +This dependency-of, and dependency-of-dependency logic is necessary because Make will not let us run a recursive rule: no rule can be in one branch of the dependency graph more than once. So instead, the root of our dependency tree either starts with a single image, or with a list of images that are the roots of their own dependency graphs. + + +Now let's look at the rules themselves. +upload-% has a dependency on create-%, to ensure what it wants to upload already exists. Additionally, it has a dependency that uses foreach and filter to look through the list of PREBUILDS, and depend on a dep-upload- rule for any images this image depends on. + +dep-upload-% is virtually identical to upload-%, also searching through PREBUILDS for possible dependencies, and depending on dep-subupload to build them. + +dep-subupload does no dependency search, but has a docker push recipe identical to upload and dep-upload. + +create-%, depend-create-%, and depend-subcreate-% work similarly to the upload rules, calling docker build instead of a docker push, and depending on the Dockerfile having been created.
When depending on the Dockerfile, we look through the NOMANIFEST list, and insert "NOMANIFEST-" in the name of the dependency on the Dockerfile. This is so that we depend on the NOMANIFEST variant if the image we are building requires us to use a postfix on the image name to access a version for a specified architecture. Otherwise, we run the Dockerfile-% rule that uses a prefix (i386/, amd64/, etc) to access the docker image we are building from. + +It's worth noting that for all of these *create* and *upload* rules, we pipe the output of docker to cat, which causes docker to stop trying to draw progress bars. This seriously cleans up the build output. + + +#### Building Dockerfiles + +There are two rules for creating Dockerfiles, and we decide in the *create* rules which of these to use by looking at the NOMANIFEST variable, and adding NOMANIFEST- to the name of the rule we depend on for Dockerfile creation. + +The rules are relatively simple: +```bash +$ cat Makefile | sed -n -E '/^(Dock)/{:n;N;s/\n\t/\n /;tn;p}' +Dockerfile-NOMANIFEST-%: $$(call snd,%)/Dockerfile + cd $(call snd,$*) && cat Dockerfile | ${SED} "s/^\(MAINTAINER\).*/\1 $(REALNAME) \"$(EMAIL)\"/" | ${SED} "s=^\(FROM \)\(.*\)$$=\1\2$(call archpostfix,$*)=" > Dockerfile-$(call fst,$*) + +Dockerfile-%: $$(call snd,%)/Dockerfile + cd $(call snd,$*) && cat Dockerfile | ${SED} "s/^\(MAINTAINER\).*/\1 $(REALNAME) \"$(EMAIL)\"/" | ${SED} "s=^\(FROM \)\(.*\)$$=\1$(call archpath,$(call fst,$*))\2=" > Dockerfile-$(call fst,$*) +$ +``` + +These two rules depend on the checkout of the git repos containing the Dockerfiles. They do this by depending on the Dockerfile inside the checked-out repo directory. The rules are responsible for the creation of individual architecture-specific derivatives of the Dockerfile that is downloaded. Additionally, the rules set the MAINTAINER of the docker image to be us.
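The MAINTAINER half of that pipeline can be tried on its own, outside of make, where the $$ doubling and quote escaping fall away (the name and email here are made up):

```bash
# \1 replays the captured "MAINTAINER" keyword; the rest of the line,
# matched by .*, is discarded and replaced with our own identity.
echo 'MAINTAINER previous maintainer' \
  | sed 's/^\(MAINTAINER\).*/\1 Jane Doe "jane@example.com"/'
```

The output is the single line: MAINTAINER Jane Doe "jane@example.com".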
Most of the heavy lifting of these rules is being done in the archpostfix and archpath functions, which are being used in a sed expression to either postfix or prefix the image that this image is built from. + + +Let's take a look at that sed with a simpler example: + +SED ABUSE: +```bash +$ echo "FROM image-version" | sed "s=^\(FROM \)\(.*\)$=\1i386/\2=" +FROM i386/image-version +$ +``` + +Unlike our previous sed commands, which have all been forms of "look for this thing, and display it", with the 's' command basically being abused as a test, this one is intentionally making a change. + +'s' commands are immediately followed by a character that is used to separate and terminate two blocks of text: the part we're looking for (match section), and the part we're replacing it with (substitution section). Previously, we've used '/' as the character following an 's' command, but since we're using '/' in the text we're placing into the file, we're going to use the '=' character instead. We've covered the '^' character at the beginning of the pattern being an anchor for "this pattern should be found only at the beginning of the line". In the match section of this command, we're introducing "$" as the opposite anchor: $ means "end of line". We're not using -E on the command line, so we're forced to use "\" before our parentheses for our grouping; not using -E here is a purely stylistic decision. The .* in the second matching group stands for 'any character, any number of times', which will definitely match against our dependent image name. + +The match section of this sed command basically translates to "at the beginning of the line, look for "FROM ", store it, and store anything else you find up to the end of the line.". These two store operations get placed in sed variables, named \1 and \2. A sed command can have up to nine such variables, which we are using in the substitution section.
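Those numbered stores are easiest to see with a toy example that captures two groups and replays them in reverse order:

```bash
# \(hello\) lands in \1 and \(world\) in \2; the replacement swaps them.
echo 'hello world' | sed 's/^\(hello\) \(world\)$/\2 \1/'
```

This prints 'world hello', showing that the replacement side is free to reorder, duplicate, or wrap whatever the match side captured.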
+ +The substitution section of this sed command uses the \1 and \2 variable references to wrap the string "i386/". This effectively places i386/ in front of the image name. + +Because we are using that sed command in a Makefile, we have to double up the "$" symbol, to prevent make from interpreting it as a variable. In the first sed command in these rules, we're also doing some minor escaping, adding a '\' in front of some quotes, so that our substitution of the maintainer has quotes around the email address. + +#### Downloading Dockerfiles + +Finally, we are at the bottom of our dependency tree. We've followed this in reverse order, but when we actually ask for things to be pushed, or to be built, these rules are the first ones run. + +There are a lot of these, of various complexities, so let's start with the simple ones first. + +##### Simple Checkout + +```bash +$ cat Makefile | sed -n -E '/^(smtp|dynamo|minio)/{:n;N;s/\n\t/\n /;tn;p}' +smtp/Dockerfile: + git clone https://github.com/namshi/docker-smtp.git smtp + cd smtp && git reset --hard $(SMTP_COMMIT) + +dynamodb_local/Dockerfile: + git clone https://github.com/cnadiminti/docker-dynamodb-local.git dynamodb_local + cd dynamodb_local && git reset --hard $(DYNAMODB_COMMIT) + +minio/Dockerfile: + git clone https://github.com/minio/minio.git minio + cd minio && git reset --hard $(MINIO_COMMIT) + +``` + +These rules are simple. They git clone a repo, then reset the repo to a known good revision. This isolates us from potential breakage from upstreams, and prevents someone who has stolen git credentials for one of our upstreams from using those credentials to slip us a malicious version.
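The clone-and-pin pattern is easy to demonstrate locally, without the network. Everything below (the temp repo, commit messages, identity) is fabricated for the demo; the git reset --hard at the end is the same trick the rules above use:

```bash
set -e
repo=$(mktemp -d)/pinned
git init -q "$repo" && cd "$repo"
# Two commits: the first is our vetted revision, the second simulates an
# upstream change we have not reviewed yet.
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m 'known good'
GOOD=$(git rev-parse HEAD)
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m 'unreviewed upstream change'
# Pin: force the working tree back to the revision we trust.
git reset -q --hard "$GOOD"
git log --format=%s
```

After the reset, the log shows only 'known good'; the unreviewed commit is gone from the checkout.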
+ +##### Checkout with Modifications + +A slightly more complex rule is localstack/Dockerfile: +```bash +$ cat Makefile | sed -n -E '/^(localsta)/{:n;N;s/\n\t/\n /;tn;p}' +localstack/Dockerfile: + git clone https://github.com/localstack/localstack.git localstack + cd localstack && git reset --hard $(LOCALSTACK_COMMIT) + ${SED} -i.bak "s=localstack/java-maven-node-python=$(USERNAME)/java_maven_node_python$(TAGNAME)=" $@ + # skip tests. they take too long. + ${SED} -i.bak "s=make lint.*=make lint=" localstack/Makefile + ${SED} -i.bak "s=\(.*lambda.*\)=#\1=" localstack/Makefile + +$ +``` + +This rule makes some minor modifications to localstack's Dockerfile, and to the Makefile that localstack's build process places in the docker image. It changes the Dockerfile such that instead of depending on upstream's version of the java-maven-node-python docker image, we depend on the version we are building. Additionally, we disable the test cases for localstack, because they take a long time, and have timing issues on emulators. It's worth noting that we use the make "$@" variable here, which evaluates to the build target, AKA, everything to the left of the ":" on the first line of our rule. + +SED ABUSE: +There is a little bit of new sed for us here. We're using the '-i' option to sed, to perform sed operations in place, which is to say, we tell sed to edit the file, storing a backup of the original with a .bak extension. Other than that, these are standard substitutions, like we covered in our previous SED ABUSE section. + +In the same approximate category is the java_maven_node_python/Dockerfile rule: +```bash +$ cat Makefile | sed -n -E '/^(java)/{:n;N;s/\n\t/\n /;tn;p}' +java_maven_node_python/Dockerfile: + git clone https://github.com/localstack/localstack.git java_maven_node_python + cd java_maven_node_python && git reset --hard $(JAVAMAVENNODEPYTHON_COMMIT) + cd java_maven_node_python && mv bin/Dockerfile.base Dockerfile + # disable installing docker-ce.
not available on many architectures in binary form. + ${SED} -i.bak "/.*install Docker.*/{N;N;N;N;N;d}" $@ +``` + +This rule does a checkout like the localstack rule, but the Dockerfile is stored somewhere other than the root of the repo. We move the Dockerfile, then we disable the installation of docker-ce in the environment. We don't use it, and it has problems with not being ported to all architectures. + +SED ABUSE: +To disable the installation of docker here, we do something a bit hacky. We find the line with 'install Docker' on it, pull the next 5 lines into the pattern buffer, then delete them all. This is effectively just a multiline delete. We use the -i.bak option, just like the last SED ABUSE. Neat and simple. + + +##### Checkout, Copy, Modify + +Some of the git repositories that we depend on do not store the Dockerfile in the root of the repository. Instead, they have one large repository, with many directories containing many docker images. In these cases, we use git to check out the repository into a directory with the name of the image followed by '-all', then copy the directory we want out of the tree. + +```bash +$ cat Makefile | sed -n -E '/^(airdock)/{:n;N;s/\n\t/\n /;tn;p}' +airdock_base/Dockerfile: + git clone https://github.com/airdock-io/docker-base.git airdock_base-all + cd airdock_base-all && git reset --hard $(AIRDOCKBASE_COMMIT) + cp -R airdock_base-all/jessie airdock_base + # work around go compiler bug by using newer version of GOSU. https://bugs.launchpad.net/qemu/+bug/1696353 + ${SED} -i.bak "s/GOSU_VERSION=.* /GOSU_VERSION=1.11 /" $@ + # work around missing architecture specific binaries in earlier versions of tini. + ${SED} -i.bak "s/TINI_VERSION=.*/TINI_VERSION=v0.16.1/" $@ + # work around the lack of architecture usage when downloading tini binaries.
https://github.com/airdock-io/docker-base/issues/8 + ${SED} -i.bak 's/tini\(.asc\|\)"/tini-\$$dpkgArch\1"/' $@ + +airdock_rvm/Dockerfile: + git clone https://github.com/airdock-io/docker-rvm.git airdock_rvm-all + cd airdock_rvm-all && git reset --hard $(AIRDOCKRVM_COMMIT) + cp -R airdock_rvm-all/jessie-rvm airdock_rvm + ${SED} -i.bak "s=airdock/base:jessie=$(USERNAME)/airdock_base$(TAGNAME)=" $@ + # add a second key used to sign ruby to the dockerfile. https://github.com/airdock-io/docker-rvm/issues/1 + ${SED} -i.bak "s=\(409B6B1796C275462A1703113804BB82D39DC0E3\)=\1 7D2BAF1CF37B13E2069D6956105BD0E739499BDB=" $@ + +airdock_fakesqs/Dockerfile: + git clone https://github.com/airdock-io/docker-fake-sqs.git airdock_fakesqs-all + cd airdock_fakesqs-all && git reset --hard $(AIRDOCKFAKESQS_COMMIT) + cp -R airdock_fakesqs-all/0.3.1 airdock_fakesqs + ${SED} -i.bak "s=airdock/rvm:latest=$(USERNAME)/airdock_rvm$(TAGNAME)=" $@ + # add a workdir declaration to the final switch to root. + ${SED} -i.bak "s=^USER root=USER root\nWORKDIR /=" $@ + # break directory creation into two pieces, one run by root. + ${SED} -i.bak "s=^USER ruby=USER root=" $@ + ${SED} -i.bak "s=cd /srv/ruby/fake-sqs.*=chown ruby.ruby /srv/ruby/fake-sqs\nUSER ruby\nWORKDIR /srv/ruby/fake-sqs\nRUN cd /srv/ruby/fake-sqs \&\& \\\\=" $@ +``` + +In airdock_base/Dockerfile, we do a clone, set it to the revision we are expecting, then copy out one directory from that repo, creating an airdock_base/ directory containing a Dockerfile, like we expect. We then change out some version numbers in the Dockerfile to work around some known bugs, and do a minor modification to two commands to allow airdock_base to be built for non-amd64 architectures. + +SED ABUSE: +The sed in the airdock_base/Dockerfile rule is relatively standard fare for us now, with the exception of the last command. In it, we use a match against "\(.asc\|\)", meaning either '.asc' or the empty string.
This lets this sed command modify both the line that contains the path to the signature for tini, and the line with the path to the tini package. Since we want a '$' in the Dockerfile, so that when the Dockerfile is run, it looks at its internal '$dpkgArch' variable, we have to escape it with a $ to prevent make from eating it, and with a \ to prevent sed from trying to interpret it. + +In airdock_rvm/Dockerfile, we do the same clone, reset hard, copy routine as we did in airdock_base/Dockerfile. Since airdock_rvm depends on airdock_base, we change the image this image derives from to point to our airdock_base image. Additionally, to work around the image using an old signature to verify its ruby download, we add another key to the gpg import line in the Dockerfile. Technically both keys are in use by the project now, so we did not remove the old one. + +airdock_fakesqs required a bit more modification. We follow the same routine as in airdock_rvm/Dockerfile, doing our clone, reset, copy, and dependent image change, then we have to make some modifications to the WORKDIR and USERs in this Dockerfile. I don't know how they successfully build it, but it looks to me like their original file is using a different Dockerfile interpreter, with a different permissions model. When we tried to run the Dockerfile, it would give us permissions errors. These changes make it function, by being a bit more explicit about creating things with the right permissions. + +SED ABUSE: +Let's take a look at the effect of these sed commands, before we dig into the commands themselves. + +```bash +$ diff -u airdock_fakesqs-all/0.3.1/Dockerfile airdock_fakesqs/Dockerfile +--- airdock_fakesqs-all/0.3.1/Dockerfile 2019-03-11 16:47:40.367319559 +0000 ++++ airdock_fakesqs/Dockerfile 2019-03-11 16:47:40.419320902 +0000 +@@ -4,15 +4,19 @@ + # TO_BUILD: docker build --rm -t airdock/fake-sqs .
+ # SOURCE: https://github.com/airdock-io/docker-fake-sqs + +-FROM airdock/rvm:latest ++FROM julialongtin/airdock_rvm:0.0.9 + MAINTAINER Jerome Guibert + ARG FAKE_SQS_VERSION=0.3.1 +-USER ruby ++USER root + +-RUN mkdir -p /srv/ruby/fake-sqs && cd /srv/ruby/fake-sqs && \ ++RUN mkdir -p /srv/ruby/fake-sqs && chown ruby.ruby /srv/ruby/fake-sqs ++USER ruby ++WORKDIR /srv/ruby/fake-sqs ++RUN cd /srv/ruby/fake-sqs && \ + rvm ruby-2.3 do gem install fake_sqs -v ${FAKE_SQS_VERSION} --no-ri --no-rdoc + + USER root ++WORKDIR / + + EXPOSE 4568 +``` + +The first change is our path change, to use the airdock_rvm image we're managing, instead of upstream's latest. +The second and third changes happen at the place in this file where it fails. On my machine, the mkdir fails, as the ruby user cannot create this directory. To solve this, we perform the directory creation as root, THEN do our rvm work. + +Now, let's look through the sed that did that. +The first sed command in this rule changed the path on the FROM line, just like the similar sed statement in the last make rule we were looking at. +The second sed command added a 'WORKDIR /' to the bottom of the Dockerfile, after the USER root. +The third sed command changes the USER line at the top of the file to use the root user to run the next command, instead of the ruby user. +Finally, the fourth sed command changes the first RUN command into two RUN commands. One creates the directory and makes sure we have permissions to it, while the second runs our command. The sed command also inserts commands to change user to ruby, and change working directories to the directory created in the first RUN command. + +Structurally, the first, second, and third sed commands are all pretty standard things we've seen before. The fourth command looks a little different, but really, it's the same sort of substitution, only it adds several lines. At the end of the statement is some tricky escaping.
+'&' characters must be escaped, because in sed, an '&' character is shorthand for 'the entire pattern that we matched'. That will be important later. The single '\' character has to be escaped into '\\\\'. + +Note that when we wrote our 'clean' rule, we added these '-all' directories manually, to make sure they would get deleted. + +##### Checkout, Copy, Modify Multiline + +elasticsearch and cassandra's checkouts are complicated, as they do a bit of injection of code into the docker entrypoint script. The entrypoint script is the script that is launched when you run a docker image. It's responsible for reading in environment variables, setting up the service that the docker image is supposed to run, and then running the service. For both elasticsearch and cassandra, we do a multiline insert, and we do it with multiple chained commands. + +Let's look at elasticsearch, as these two rules are almost identical. + +```bash +$ cat Makefile | sed -n -E '/^(ela)/{:n;N;s/\n\t/\n /;tn;p}' +elasticsearch/Dockerfile: + git clone https://github.com/blacktop/docker-elasticsearch-alpine.git elasticsearch-all + cd elasticsearch-all && git reset --hard $(ELASTICSEARCH_COMMIT) + cp -R elasticsearch-all/5.6/ elasticsearch + # add a block to the entrypoint script to interpret CS_JVM_OPTIONS, modifying the jvm.options before launching elasticsearch. + # first, add a marker to be replaced before the last if. + ${SED} -i.bak -r ':a;$$!{N;ba};s/^(.*)(\n?)fi/\2\1fi\nREPLACEME/' elasticsearch/elastic-entrypoint.sh + # next, load our variables. + ${SED} -i.bak 's@REPLACEME@MY_APP_CONFIG="/usr/share/elasticsearch/config/"\n&@' elasticsearch/elastic-entrypoint.sh + # add our parser and replacer. + ${SED} -i.bak $$'s@REPLACEME@if [ !
-z "$${JVM_OPTIONS_ES}" ]; then\\nfor x in $${JVM_OPTIONS_ES}; do { l="$${x%%=*}"; r=""; e=""; [ "$$x" != "$${x/=//}" ] \&\& e="=" \&\& r="$${x##*=}"; [ "$$x" != "$${x##-Xm?}" ] \&\& r="$${x##-Xm?}" \&\& l="$${x%%$$r}"; echo $$l $$e $$r; sed -i.bak -r \'s/^[# ]?(\'"$$l$$e"\').*/\\\\1\'"$$r"\'/\' "$$MY_APP_CONFIG/jvm.options"; diff "$$MY_APP_CONFIG/jvm.options.bak" "$$MY_APP_CONFIG/jvm.options" \&\& echo "no difference"; } done;\\nfi\\n&@' elasticsearch/elastic-entrypoint.sh + # remove the marker we added earlier. + ${SED} -i.bak 's@REPLACEME@@' elasticsearch/elastic-entrypoint.sh + +$ +``` + +In this rule, we're checking out the git tree, and copying one directory that contains our Dockerfile, and our entrypoint for elasticsearch. Following that, we have four sed commands, one of which inserts some very complicated bash. + +SED ABUSE: +Our first sed command in this rule uses a new trick. We're using -i to edit in place, and -r to enable extended regex syntax (the older GNU spelling of the -E we used earlier). Instead of starting with a match (/.../) or a substitution (s/thing/otherthing/), we immediately start with a label. Let's break down this command. + +```sed +:a; # a label, which we can loop back to. +$!{ # enter this block on every line except the last. note that to get this "$" past make, we had to escape it, by replacing it with $$. +N; # pull the next line of content into the pattern space +ba # branch to the 'a' label. +}; +s/(.*)(\n?)fi/\2\1fi\nREPLACEME/ # match everything up to the last 'fi' and replace it with a 'fi', a new line, and REPLACEME +``` + +What does that effectively do? The source file contains a lot of lines with 'fi' in them; by inserting REPLACEME after the last one, this gives us an anchor point that we can safely run simpler sed commands against. + +For instance, our next sed command: +```sed +s@REPLACEME@MY_APP_CONFIG="/usr/share/elasticsearch/config/"\n&@ +``` + +The 's' on this command is using '@' symbols to separate the pattern from the replacement.
It operates by finding the 'REPLACEME' that we inserted with the last command. As we touched on earlier, the unescaped '&' at the end of this replacement repeats back the pattern in the replacement. This effectively means that this line replaces REPLACEME with a new line of code, and puts the REPLACEME after the line it inserted. + +BASH ABUSE: +The next sed command works similarly, however it inserts an extremely complicated pile of bash on one line. Let's take a look at it. I'm going to remove some semicolons, remove the escaping, and insert line breaks and comments, to make this a bit more readable. +```bash +if [ ! -z "${JVM_OPTIONS_ES}" ]; then # only if JVM_OPTIONS_ES was set when docker was run + for x in ${JVM_OPTIONS_ES} + do { + # set l to everything to the left of an equal sign. + l="${x%%=*}" + # clear out r and e. + r="" + e="" + # if there was an equal sign, set e to an equal sign, and set r to everything after the equal sign. + [ "$x" != "${x/=//}" ] && e="=" && r="${x##*=}" + # if there was a '-Xm' (a java memory option), set r to the content after the -Xm?, and set l to the -Xm? itself. + [ "$x" != "${x##-Xm?}" ] && r="${x##-Xm?}" && l="${x%%$r}" + # debugging code. echo what we saw. + echo $l $e $r + # perform a substitution, uncommenting a line found that starts with $l$e, and replacing it with $l$e$r. + sed -i.bak -r 's/^[# ]?('"$l$e"').*/\1'"$r"'/' "$MY_APP_CONFIG/jvm.options" + # show the change with diff; if nothing changed, say "no difference". + diff "$MY_APP_CONFIG/jvm.options.bak" "$MY_APP_CONFIG/jvm.options" && echo "no difference"; + } done; +fi +``` + +This bash script looks for a JVM_OPTIONS_ES environment variable, and if it finds it, it rewrites the jvm.options file, uncommenting and replacing the values for java options. This allows us to change the memory pool settings, and possibly other settings, by setting a variable in the docker compose file that starts up our integration test.
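The whole marker dance — insert above REPLACEME, replay the marker with '&', and finally delete it — can be watched in miniature (GNU sed, which honors \n in the replacement; the inserted text is arbitrary):

```bash
# First command: put new content above the marker, and use '&' to emit the
# marker again so further insertions could stack on top of it.
# Second command: the equivalent of the Makefile's final cleanup, deleting
# the marker once we're done.
printf 'REPLACEME\n' \
  | sed 's@REPLACEME@inserted line\n&@' \
  | sed 's@REPLACEME@@'
```

What remains is 'inserted line' followed by the now-empty line the marker occupied.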
+ +This bash script is inserted by a sed command and CONTAINS a sed command, and lots of special characters. The quoting of this is handled a bit differently: instead of just surrounding our sed command in '' characters, we use $'', which is bash for "use C style escaping here". + +SED ABUSE: +The bash script above uses a relatively normal sed command, but intersperses it with ' and " characters, in order to pass the sed command in groups of '' characters, while using "" around the sections that we have variables in. Bash will substitute variables in double quotes, but will not substitute them in single quotes. +This substitution command uses slashes as its separators. It starts by anchoring to the beginning of the line, and matching against either a single '#' character, or a single space. It does this by grouping the space and # in a character class ([ and ]), then using a question mark to indicate "maybe one of these.". The substitution continues by matching the bash variables $l and $e, saving them in \1, matching (and therefore removing) anything else on the line, and replacing the line with \1, followed immediately by the contents of the bash variable $r. + +The cassandra/Dockerfile rule is almost identical to this last rule, only substituting out the name of the variable we expect from docker to CS_JVM_OPTIONS, and changing the path to the jvm.options file. + +# Pitfalls I fell into writing this + +The first large mistake I made when writing this, is that the root of the makefile's dependency tree contained both images that had dependencies, and the dependent images themselves. This had me writing methods to keep the image build process from stepping on itself. What was happening is that, in the case of the airdock-* and localstack images, when trying to build all of the images at once, make would race all the way down to the git clone steps, and run the git clone multiple times at the same time, where it just needs to be run once.
+ +The second was that I didn't really understand that manifest files refer to dockerhub only, not to the local machine. This was giving me similar race conditions, where an image build for architecture A would complete, and try to build the manifest when architecture B was still building. + +The third was writing really complicated SED and BASH and MAKE. ;p \ No newline at end of file From 494e6c54476af8789609016b68fdccaa1f2d55f4 Mon Sep 17 00:00:00 2001 From: Chris Penner Date: Tue, 19 Mar 2019 14:25:23 +0100 Subject: [PATCH 13/23] SCIM delete user endpoint (#660) Implement SCIM delete user endpoint in spar --- services/spar/src/Spar/App.hs | 10 +- services/spar/src/Spar/Data.hs | 94 ++++++++---- services/spar/src/Spar/Error.hs | 3 + services/spar/src/Spar/Intra/Brig.hs | 136 ++++++++++++------ services/spar/src/Spar/Scim/User.hs | 61 +++++--- .../test-integration/Test/Spar/AppSpec.hs | 2 +- .../test-integration/Test/Spar/DataSpec.hs | 17 ++- .../Test/Spar/Scim/UserSpec.hs | 51 ++++++- services/spar/test-integration/Util/Core.hs | 4 +- stack.yaml | 2 +- 10 files changed, 267 insertions(+), 113 deletions(-) diff --git a/services/spar/src/Spar/App.hs b/services/spar/src/Spar/App.hs index dfd5df682a0..f0b61ef827a 100644 --- a/services/spar/src/Spar/App.hs +++ b/services/spar/src/Spar/App.hs @@ -127,7 +127,7 @@ wrapMonadClient action = do (throwSpar . SparCassandraError . cs . show @SomeException) insertUser :: SAML.UserRef -> UserId -> Spar () -insertUser uref uid = wrapMonadClient $ Data.insertUser uref uid +insertUser uref uid = wrapMonadClient $ Data.insertSAMLUser uref uid -- | Look up user locally, then in brig, then return the 'UserId'. If either lookup fails, return -- 'Nothing'. See also: 'Spar.App.createUser'. @@ -136,7 +136,7 @@ insertUser uref uid = wrapMonadClient $ Data.insertUser uref uid -- brig or galley crashing) will cause the lookup here to yield invalid user. 
getUser :: SAML.UserRef -> Spar (Maybe UserId) getUser uref = do - muid <- wrapMonadClient $ Data.getUser uref + muid <- wrapMonadClient $ Data.getSAMLUser uref case muid of Nothing -> pure Nothing Just uid -> do @@ -170,7 +170,7 @@ createUser_ :: UserId -> SAML.UserRef -> Maybe Name -> ManagedBy -> Spar () createUser_ buid suid mbName managedBy = do teamid <- (^. idpExtraInfo) <$> getIdPConfigByIssuer (suid ^. uidTenant) insertUser suid buid - buid' <- Intra.createUser suid buid teamid mbName managedBy + buid' <- Intra.createBrigUser suid buid teamid mbName managedBy assert (buid == buid') $ pure () -- | Check if 'UserId' is in the team that hosts the idp that owns the 'UserRef'. If so, write the @@ -178,11 +178,11 @@ createUser_ buid suid mbName managedBy = do bindUser :: UserId -> SAML.UserRef -> Spar UserId bindUser buid userref = do teamid <- (^. idpExtraInfo) <$> getIdPConfigByIssuer (userref ^. uidTenant) - uteamid <- Intra.getUserTeam buid + uteamid <- Intra.getBrigUserTeam buid unless (uteamid == Just teamid) (throwSpar . SparBindFromWrongOrNoTeam . cs . show $ uteamid) insertUser userref buid - Intra.bindUser buid userref >>= \case + Intra.bindBrigUser buid userref >>= \case True -> pure buid False -> do SAML.logger SAML.Warn $ "SparBindUserDisappearedFromBrig: " <> show buid diff --git a/services/spar/src/Spar/Data.hs b/services/spar/src/Spar/Data.hs index 1961d99bafe..cc6ecd3a7d3 100644 --- a/services/spar/src/Spar/Data.hs +++ b/services/spar/src/Spar/Data.hs @@ -7,15 +7,23 @@ module Spar.Data , Env(..) 
, mkEnv , mkTTLAssertions + -- * SAML state handling , storeAReqID, unStoreAReqID, isAliveAReqID , storeAssID, unStoreAssID, isAliveAssID , storeVerdictFormat , getVerdictFormat - , insertUser - , getUser - , deleteUsersByIssuer + + -- * SAML Users + , insertSAMLUser + , getSAMLUser + , deleteSAMLUsersByIssuer + , deleteSAMLUser + + -- * Cookies , insertBindCookie , lookupBindCookie + + -- * IDPs , storeIdPConfig , getIdPConfig , getIdPConfigByIssuer @@ -35,6 +43,7 @@ module Spar.Data , insertScimUser , getScimUser , getScimUsers + , deleteScimUser ) where import Imports @@ -51,6 +60,7 @@ import Spar.Data.Instances (VerdictFormatRow, VerdictFormatCon, fromVerdictForma import Spar.Types import Spar.Scim.Types import URI.ByteString +import Text.RawString.QQ import qualified Data.List.NonEmpty as NL import qualified SAML2.WebSSO as SAML @@ -196,25 +206,32 @@ getVerdictFormat req = (>>= toVerdictFormat) <$> -- user -- | Add new user. If user with this 'SAML.UserId' exists, overwrite it. -insertUser :: (HasCallStack, MonadClient m) => SAML.UserRef -> UserId -> m () -insertUser (SAML.UserRef tenant subject) uid = retry x5 . write ins $ params Quorum (tenant, subject, uid) +insertSAMLUser :: (HasCallStack, MonadClient m) => SAML.UserRef -> UserId -> m () +insertSAMLUser (SAML.UserRef tenant subject) uid = retry x5 . write ins $ params Quorum (tenant, subject, uid) where ins :: PrepQuery W (SAML.Issuer, SAML.NameID, UserId) () ins = "INSERT INTO user (issuer, sso_id, uid) VALUES (?, ?, ?)" -getUser :: (HasCallStack, MonadClient m) => SAML.UserRef -> m (Maybe UserId) -getUser (SAML.UserRef tenant subject) = runIdentity <$$> +getSAMLUser :: (HasCallStack, MonadClient m) => SAML.UserRef -> m (Maybe UserId) +getSAMLUser (SAML.UserRef tenant subject) = runIdentity <$$> (retry x1 . query1 sel $ params Quorum (tenant, subject)) where sel :: PrepQuery R (SAML.Issuer, SAML.NameID) (Identity UserId) sel = "SELECT uid FROM user WHERE issuer = ? AND sso_id = ?" 
-deleteUsersByIssuer :: (HasCallStack, MonadClient m) => SAML.Issuer -> m () -deleteUsersByIssuer issuer = retry x5 . write del $ params Quorum (Identity issuer) +deleteSAMLUsersByIssuer :: (HasCallStack, MonadClient m) => SAML.Issuer -> m () +deleteSAMLUsersByIssuer issuer = retry x5 . write del $ params Quorum (Identity issuer) where del :: PrepQuery W (Identity SAML.Issuer) () del = "DELETE FROM user WHERE issuer = ?" +-- | Delete a user from the saml users table. +deleteSAMLUser :: (HasCallStack, MonadClient m) => SAML.UserRef -> m () +deleteSAMLUser (SAML.UserRef tenant subject) = retry x5 . write del $ params Quorum (tenant, subject) + where + del :: PrepQuery W (SAML.Issuer, SAML.NameID) () + del = "DELETE FROM user WHERE issuer = ? AND sso_id = ?" + ---------------------------------------------------------------------- -- bind cookies @@ -362,7 +379,7 @@ deleteTeam team = do for_ idps $ \idp -> do let idpid = idp ^. SAML.idpId issuer = idp ^. SAML.idpMetadata . SAML.edIssuer - deleteUsersByIssuer issuer + deleteSAMLUsersByIssuer issuer deleteIdPConfig idpid issuer team ---------------------------------------------------------------------- @@ -386,12 +403,16 @@ insertScimToken token ScimTokenInfo{..} = retry x5 $ batch $ do addPrepQuery insByTeam (token, stiTeam, stiId, stiCreatedAt, stiIdP, stiDescr) where insByToken, insByTeam :: PrepQuery W ScimTokenRow () - insByToken = "INSERT INTO team_provisioning_by_token \ - \(token_, team, id, created_at, idp, descr) \ - \VALUES (?, ?, ?, ?, ?, ?)" - insByTeam = "INSERT INTO team_provisioning_by_team \ - \(token_, team, id, created_at, idp, descr) \ - \VALUES (?, ?, ?, ?, ?, ?)" + insByToken = [r| + INSERT INTO team_provisioning_by_token + (token_, team, id, created_at, idp, descr) + VALUES (?, ?, ?, ?, ?, ?) + |] + insByTeam = [r| + INSERT INTO team_provisioning_by_team + (token_, team, id, created_at, idp, descr) + VALUES (?, ?, ?, ?, ?, ?) 
+ |] -- | Check whether a token exists and if yes, what team and IdP are -- associated with it. @@ -403,8 +424,10 @@ lookupScimToken token = do pure $ fmap fromScimTokenRow mbRow where sel :: PrepQuery R (Identity ScimToken) ScimTokenRow - sel = "SELECT token_, team, id, created_at, idp, descr \ - \FROM team_provisioning_by_token WHERE token_ = ?" + sel = [r| + SELECT token_, team, id, created_at, idp, descr + FROM team_provisioning_by_token WHERE token_ = ? + |] -- | List all tokens associated with a team, in the order of their creation. getScimTokens @@ -417,8 +440,10 @@ getScimTokens team = do pure $ sortOn stiCreatedAt $ map fromScimTokenRow rows where sel :: PrepQuery R (Identity TeamId) ScimTokenRow - sel = "SELECT token_, team, id, created_at, idp, descr \ - \FROM team_provisioning_by_team WHERE team = ?" + sel = [r| + SELECT token_, team, id, created_at, idp, descr + FROM team_provisioning_by_team WHERE team = ? + |] -- | Delete a token. deleteScimToken @@ -434,16 +459,22 @@ deleteScimToken team tokenid = do addPrepQuery delByToken (Identity token) where selById :: PrepQuery R (TeamId, ScimTokenId) (Identity ScimToken) - selById = "SELECT token_ FROM team_provisioning_by_team \ - \WHERE team = ? AND id = ?" + selById = [r| + SELECT token_ FROM team_provisioning_by_team + WHERE team = ? AND id = ? + |] delById :: PrepQuery W (TeamId, ScimTokenId) () - delById = "DELETE FROM team_provisioning_by_team \ - \WHERE team = ? AND id = ?" + delById = [r| + DELETE FROM team_provisioning_by_team + WHERE team = ? AND id = ? + |] delByToken :: PrepQuery W (Identity ScimToken) () - delByToken = "DELETE FROM team_provisioning_by_token \ - \WHERE token_ = ?" + delByToken = [r| + DELETE FROM team_provisioning_by_token + WHERE token_ = ? + |] -- | Delete all tokens belonging to a team. 
deleteTeamScimTokens @@ -505,3 +536,14 @@ getScimUsers uids = runIdentity <$$> sel :: PrepQuery R (Identity [UserId]) (Identity (ScimC.User.StoredUser ScimUserExtra)) sel = "SELECT json FROM scim_user WHERE id in ?" + + +-- | Delete a SCIM user by id. +-- You'll also want to ensure they are deleted in Brig and in the SAML Users table. +deleteScimUser + :: (HasCallStack, MonadClient m) + => UserId -> m () +deleteScimUser uid = retry x5 . write del $ params Quorum (Identity uid) + where + del :: PrepQuery W (Identity UserId) () + del = "DELETE FROM scim_user WHERE id = ?" diff --git a/services/spar/src/Spar/Error.hs b/services/spar/src/Spar/Error.hs index c6eed43ce02..c94b139f3c5 100644 --- a/services/spar/src/Spar/Error.hs +++ b/services/spar/src/Spar/Error.hs @@ -29,6 +29,9 @@ import qualified SAML2.WebSSO as SAML type SparError = SAML.Error SparCustomError +-- FUTUREWORK: This instance should probably be inside saml2-web-sso instead. +instance Exception SparError + throwSpar :: MonadError SparError m => SparCustomError -> m a throwSpar = throwError . SAML.CustomError diff --git a/services/spar/src/Spar/Intra/Brig.hs b/services/spar/src/Spar/Intra/Brig.hs index afaf22d66bb..0202685afd0 100644 --- a/services/spar/src/Spar/Intra/Brig.hs +++ b/services/spar/src/Spar/Intra/Brig.hs @@ -1,7 +1,28 @@ {-# LANGUAGE GeneralizedNewtypeDeriving #-} -- | Client functions for interacting with the Brig API. -module Spar.Intra.Brig where +module Spar.Intra.Brig + ( toUserSSOId + , fromUserSSOId + , getBrigUser + , getBrigUserTeam + , getBrigUsers + , getBrigUserByHandle + , setBrigUserName + , setBrigUserHandle + , setBrigUserManagedBy + , setBrigUserRichInfo + , bindBrigUser + , deleteBrigUser + , createBrigUser + , isTeamUser + , getZUsrOwnedTeam + , ensureReAuthorised + , ssoLogin + , parseResponse + + , MonadSparToBrig(..) + ) where -- TODO: when creating user, we need to be able to provide more -- master data (first name, last name, ...) 
@@ -69,7 +90,7 @@ instance MonadSparToBrig m => MonadSparToBrig (ReaderT r m) where -- | Create a user on brig. -createUser +createBrigUser :: (HasCallStack, MonadSparToBrig m) => SAML.UserRef -- ^ SSO identity -> UserId @@ -77,7 +98,7 @@ createUser -> Maybe Name -- ^ User name (if 'Nothing', the subject ID will be used) -> ManagedBy -- ^ Who should have control over the user -> m UserId -createUser suid (Id buid) teamid mbName managedBy = do +createBrigUser suid (Id buid) teamid mbName managedBy = do uname :: Name <- case mbName of Just n -> pure n Nothing -> do @@ -109,17 +130,18 @@ createUser suid (Id buid) teamid mbName managedBy = do $ method POST . path "/i/users" . json newUser - if | statusCode resp < 300 + let sCode = statusCode resp + if | sCode < 300 -> userId . selfUser <$> parseResponse @SelfProfile resp - | inRange (400, 499) (statusCode resp) + | inRange (400, 499) sCode -> throwSpar . SparBrigErrorWith (responseStatus resp) $ "create user failed" | otherwise - -> throwSpar . SparBrigError . cs $ "create user failed with status " <> show (statusCode resp) + -> throwSpar . SparBrigError . cs $ "create user failed with status " <> show sCode -- | Get a user; returns 'Nothing' if the user was not found or has been deleted. -getUser :: (HasCallStack, MonadSparToBrig m) => UserId -> m (Maybe User) -getUser buid = do +getBrigUser :: (HasCallStack, MonadSparToBrig m) => UserId -> m (Maybe User) +getBrigUser buid = do resp :: Response (Maybe LBS) <- call $ method GET . path "/self" @@ -136,15 +158,15 @@ getUser buid = do -- | Get a list of users; returns a shorter list if some 'UserId's come up empty (no errors). -- -- TODO: implement an internal end-point on brig that makes this possible with one request. -getUsers :: (HasCallStack, MonadSparToBrig m) => [UserId] -> m [User] -getUsers = fmap catMaybes . mapM getUser +getBrigUsers :: (HasCallStack, MonadSparToBrig m) => [UserId] -> m [User] +getBrigUsers = fmap catMaybes . 
mapM getBrigUser -- | Get a user; returns 'Nothing' if the user was not found. -- -- TODO: currently this is not used, but it might be useful later when/if -- @hscim@ stops doing checks during user creation. -getUserByHandle :: (HasCallStack, MonadSparToBrig m) => Handle -> m (Maybe User) -getUserByHandle handle = do +getBrigUserByHandle :: (HasCallStack, MonadSparToBrig m) => Handle -> m (Maybe User) +getBrigUserByHandle handle = do resp :: Response (Maybe LBS) <- call $ method GET . path "/i/users" @@ -161,8 +183,8 @@ getUserByHandle handle = do -- | Set user' name. Fails with status <500 if brig fails with <500, and with 500 if brig -- fails with >= 500. -setName :: (HasCallStack, MonadSparToBrig m) => UserId -> Name -> m () -setName buid name = do +setBrigUserName :: (HasCallStack, MonadSparToBrig m) => UserId -> Name -> m () +setBrigUserName buid name = do resp <- call $ method PUT . path "/self" @@ -174,84 +196,102 @@ setName buid name = do , uupAssets = Nothing , uupAccentId = Nothing } - if | statusCode resp < 300 + let sCode = statusCode resp + if | sCode < 300 -> pure () - | inRange (400, 499) (statusCode resp) + | inRange (400, 499) sCode -> throwSpar . SparBrigErrorWith (responseStatus resp) $ "set name failed" | otherwise - -> throwSpar . SparBrigError . cs $ "set name failed with status " <> show (statusCode resp) + -> throwSpar . SparBrigError . cs $ "set name failed with status " <> show sCode -- | Set user's handle. Fails with status <500 if brig fails with <500, and with 500 if brig fails -- with >= 500. -setHandle :: (HasCallStack, MonadSparToBrig m) => UserId -> Handle -> m () -setHandle buid (Handle handle) = do +setBrigUserHandle :: (HasCallStack, MonadSparToBrig m) => UserId -> Handle -> m () +setBrigUserHandle buid (Handle handle) = do resp <- call $ method PUT . path "/self/handle" . header "Z-User" (toByteString' buid) . header "Z-Connection" "" . 
json (HandleUpdate handle) - if | statusCode resp < 300 + let sCode = statusCode resp + if | sCode < 300 -> pure () - | inRange (400, 499) (statusCode resp) + | inRange (400, 499) sCode -> throwSpar . SparBrigErrorWith (responseStatus resp) $ "set handle failed" | otherwise - -> throwSpar . SparBrigError . cs $ "set handle failed with status " <> show (statusCode resp) + -> throwSpar . SparBrigError . cs $ "set handle failed with status " <> show sCode -- | Set user's managedBy. Fails with status <500 if brig fails with <500, and with 500 if -- brig fails with >= 500. -setManagedBy :: (HasCallStack, MonadSparToBrig m) => UserId -> ManagedBy -> m () -setManagedBy buid managedBy = do +setBrigUserManagedBy :: (HasCallStack, MonadSparToBrig m) => UserId -> ManagedBy -> m () +setBrigUserManagedBy buid managedBy = do resp <- call $ method PUT . paths ["i", "users", toByteString' buid, "managed-by"] . json (ManagedByUpdate managedBy) - if | statusCode resp < 300 + let sCode = statusCode resp + if | sCode < 300 -> pure () - | inRange (400, 499) (statusCode resp) + | inRange (400, 499) sCode -> throwSpar . SparBrigErrorWith (responseStatus resp) $ "set managedBy failed" | otherwise - -> throwSpar . SparBrigError . cs $ "set managedBy failed with status " <> show (statusCode resp) + -> throwSpar . SparBrigError . cs $ "set managedBy failed with status " <> show sCode -- | Set user's richInfo. Fails with status <500 if brig fails with <500, and with 500 if -- brig fails with >= 500. -setRichInfo :: (HasCallStack, MonadSparToBrig m) => UserId -> RichInfo -> m () -setRichInfo buid richInfo = do +setBrigUserRichInfo :: (HasCallStack, MonadSparToBrig m) => UserId -> RichInfo -> m () +setBrigUserRichInfo buid richInfo = do resp <- call $ method PUT . paths ["i", "users", toByteString' buid, "rich-info"] . 
json (RichInfoUpdate richInfo) - if | statusCode resp < 300 + let sCode = statusCode resp + if | sCode < 300 -> pure () - | inRange (400, 499) (statusCode resp) + | inRange (400, 499) sCode -> throwSpar . SparBrigErrorWith (responseStatus resp) $ "set richInfo failed" | otherwise - -> throwSpar . SparBrigError . cs $ "set richInfo failed with status " <> show (statusCode resp) + -> throwSpar . SparBrigError . cs $ "set richInfo failed with status " <> show sCode -- | This works under the assumption that the user must exist on brig. If it does not, brig -- responds with 404 and this function returns 'False'. -bindUser :: (HasCallStack, MonadSparToBrig m) => UserId -> SAML.UserRef -> m Bool -bindUser uid (toUserSSOId -> ussoid) = do +bindBrigUser :: (HasCallStack, MonadSparToBrig m) => UserId -> SAML.UserRef -> m Bool +bindBrigUser uid (toUserSSOId -> ussoid) = do resp <- call $ method PUT . paths ["/i/users", toByteString' uid, "sso-id"] . json ussoid pure $ Bilge.statusCode resp < 300 +-- | Call brig to delete a user +deleteBrigUser :: (HasCallStack, MonadSparToBrig m, MonadIO m) => UserId -> m () +deleteBrigUser buid = do + resp :: Response (Maybe LBS) <- call + $ method DELETE + . paths ["/i/users", toByteString' buid] + let sCode = statusCode resp + if + | sCode < 300 -> pure () + | inRange (400, 499) sCode + -> throwSpar . SparBrigErrorWith (responseStatus resp) $ "failed to delete user" + | otherwise -> throwSpar . SparBrigError . cs + $ "delete user failed with status " <> show sCode + -- | Check that a user id exists on brig and has a team id. isTeamUser :: (HasCallStack, MonadSparToBrig m) => UserId -> m Bool -isTeamUser buid = isJust <$> getUserTeam buid +isTeamUser buid = isJust <$> getBrigUserTeam buid -- | Check that a user id exists on brig and has a team id. 
-getUserTeam :: (HasCallStack, MonadSparToBrig m) => UserId -> m (Maybe TeamId) -getUserTeam buid = do - usr <- getUser buid +getBrigUserTeam :: (HasCallStack, MonadSparToBrig m) => UserId -> m (Maybe TeamId) +getBrigUserTeam buid = do + usr <- getBrigUser buid pure $ userTeam =<< usr -- | If user is not in team, throw 'SparNotInTeam'; if user is in team but not owner, throw -- 'SparNotTeamOwner'; otherwise, return. assertIsTeamOwner :: (HasCallStack, MonadSparToBrig m) => UserId -> TeamId -> m () assertIsTeamOwner buid tid = do - self <- maybe (throwSpar SparNotInTeam) pure =<< getUser buid + self <- maybe (throwSpar SparNotInTeam) pure =<< getBrigUser buid when (userTeam self /= Just tid) $ (throwSpar SparNotInTeam) resp :: Response (Maybe LBS) <- call $ method GET @@ -265,7 +305,7 @@ getZUsrOwnedTeam :: (HasCallStack, SAML.SP m, MonadSparToBrig m) => Maybe UserId -> m TeamId getZUsrOwnedTeam Nothing = throwSpar SparMissingZUsr getZUsrOwnedTeam (Just uid) = do - usr <- getUser uid + usr <- getBrigUser uid case userTeam =<< usr of Nothing -> throwSpar SparNotInTeam Just teamid -> teamid <$ assertIsTeamOwner uid teamid @@ -279,14 +319,15 @@ ensureReAuthorised (Just uid) secret = do $ method GET . paths ["/i/users", toByteString' uid, "reauthenticate"] . json (ReAuthUser secret) - if | statusCode resp == 200 + let sCode = statusCode resp + if | sCode == 200 -> pure () - | statusCode resp == 403 + | sCode == 403 -> throwSpar SparReAuthRequired - | inRange (400, 499) (statusCode resp) + | inRange (400, 499) sCode -> throwSpar . SparBrigErrorWith (responseStatus resp) $ "reauthentication failed" | otherwise - -> throwSpar . SparBrigError . cs $ "reauthentication failed with status " <> show (statusCode resp) + -> throwSpar . SparBrigError . cs $ "reauthentication failed with status " <> show sCode -- | Get persistent cookie from brig and redirect user past login process. -- @@ -299,9 +340,10 @@ ssoLogin buid = do . path "/i/sso-login" . json (SsoLogin buid Nothing) .
queryItem "persist" "true" - if | statusCode resp < 300 + let sCode = statusCode resp + if | sCode < 300 -> Just <$> respToCookie resp - | inRange (400, 499) (statusCode resp) + | inRange (400, 499) sCode -> pure Nothing | otherwise - -> throwSpar . SparBrigError . cs $ "sso-login failed with status " <> show (statusCode resp) + -> throwSpar . SparBrigError . cs $ "sso-login failed with status " <> show sCode diff --git a/services/spar/src/Spar/Scim/User.hs b/services/spar/src/Spar/Scim/User.hs index b54624c09fe..be51c0f9142 100644 --- a/services/spar/src/Spar/Scim/User.hs +++ b/services/spar/src/Spar/Scim/User.hs @@ -32,7 +32,8 @@ import Data.Range import Data.String.Conversions import Galley.Types.Teams as Galley import Network.URI -import Spar.App (Spar, Env, wrapMonadClient, sparCtxOpts, createUser_, wrapMonadClient) + +import Spar.App (Spar, Env, wrapMonadClient, sparCtxOpts, sparCtxLogger, createUser_, wrapMonadClient) import Spar.Intra.Galley import Spar.Scim.Types import Spar.Scim.Auth () @@ -44,6 +45,7 @@ import qualified SAML2.WebSSO as SAML import qualified Spar.Data as Data import qualified Spar.Intra.Brig as Intra.Brig import qualified URI.ByteString as URIBS +import qualified System.Logger as Log import qualified Web.Scim.Class.User as Scim import qualified Web.Scim.Filter as Scim @@ -70,7 +72,7 @@ instance Scim.UserDB Spar where members <- lift $ getTeamMembers stiTeam brigusers :: [User] <- filter (not . userDeleted) <$> - lift (Intra.Brig.getUsers ((^. Galley.userId) <$> members)) + lift (Intra.Brig.getBrigUsers ((^. Galley.userId) <$> members)) scimusers :: [Scim.StoredUser ScimUserExtra] <- lift . wrapMonadClient . 
Data.getScimUsers $ Brig.userId <$> brigusers let check user = case mbFilter of @@ -90,7 +92,7 @@ instance Scim.UserDB Spar where -> Scim.ScimHandler Spar (Maybe (Scim.StoredUser ScimUserExtra)) get ScimTokenInfo{stiTeam} uidText = do uid <- parseUid uidText - mbBrigUser <- lift (Intra.Brig.getUser uid) + mbBrigUser <- lift (Intra.Brig.getBrigUser uid) if isJust mbBrigUser && (userTeam =<< mbBrigUser) == Just stiTeam then lift . wrapMonadClient . Data.getScimUser $ uid else pure Nothing @@ -110,12 +112,35 @@ instance Scim.UserDB Spar where updateValidScimUser tokinfo uidText =<< validateScimUser tokinfo newScimUser delete :: ScimTokenInfo -> Text -> Scim.ScimHandler Spar Bool - delete _ _ = - throwError $ Scim.ScimError - mempty - (Scim.Status 404) - Nothing - (Just "User delete is not implemented yet") -- TODO + delete ScimTokenInfo{stiTeam} uidText = do + uid :: UserId <- parseUid uidText + mbBrigUser <- lift (Intra.Brig.getBrigUser uid) + case mbBrigUser of + Nothing -> do + -- double-deletion gets you a 404. + throwError $ Scim.notFound "user" (cs $ show uid) + Just brigUser -> do + -- FUTUREWORK: currently it's impossible to delete the last available team owner via SCIM + -- (because that owner won't be managed by SCIM in the first place), but if it ever becomes + -- possible, we should do a check here and prohibit it. + unless (userTeam brigUser == Just stiTeam) $ + -- users from other teams get you a 404. + throwError $ Scim.notFound "user" (cs $ show uid) + ssoId <- maybe (logThenServerError $ "no userSSOId for user " <> cs uidText) + pure + $ Brig.userSSOId brigUser + uref <- either logThenServerError pure $ Intra.Brig.fromUserSSOId ssoId + lift . wrapMonadClient $ Data.deleteSAMLUser uref + lift . 
wrapMonadClient $ Data.deleteScimUser uid + lift $ Intra.Brig.deleteBrigUser uid + return True + where + logThenServerError :: String -> Scim.ScimHandler Spar b + logThenServerError err = do + logger <- asks sparCtxLogger + Log.err logger $ Log.msg err + throwError $ Scim.serverError "Server Error" + getMeta :: ScimTokenInfo -> Scim.ScimHandler Spar Scim.Meta getMeta _ = @@ -264,9 +289,9 @@ createValidScimUser (ValidScimUser user uref handl mbName richInfo) = do lift $ createUser_ buid uref mbName ManagedByScim -- Set user handle on brig (which can't be done during user creation yet). -- TODO: handle errors better here? - lift $ Intra.Brig.setHandle buid handl + lift $ Intra.Brig.setBrigUserHandle buid handl -- Set rich info on brig - lift $ Intra.Brig.setRichInfo buid richInfo + lift $ Intra.Brig.setBrigUserRichInfo buid richInfo pure storedUser @@ -306,16 +331,16 @@ updateValidScimUser tokinfo uidText newScimUser = do -- update 'SAML.UserRef' let uref = newScimUser ^. vsuSAMLUserRef - lift . wrapMonadClient $ Data.insertUser uref uid -- on spar - bindok <- lift $ Intra.Brig.bindUser uid uref -- on brig + lift . wrapMonadClient $ Data.insertSAMLUser uref uid -- on spar + bindok <- lift $ Intra.Brig.bindBrigUser uid uref -- on brig unless bindok . throwError $ Scim.serverError "Failed to update SAML UserRef on brig." -- this can only happen if user is found in spar.scim_user, but missing on brig. -- (internal error? race condition?) - maybe (pure ()) (lift . Intra.Brig.setName uid) $ newScimUser ^. vsuName - lift . Intra.Brig.setHandle uid $ newScimUser ^. vsuHandle - lift . Intra.Brig.setRichInfo uid $ newScimUser ^. vsuRichInfo + maybe (pure ()) (lift . Intra.Brig.setBrigUserName uid) $ newScimUser ^. vsuName + lift . Intra.Brig.setBrigUserHandle uid $ newScimUser ^. vsuHandle + lift . Intra.Brig.setBrigUserRichInfo uid $ newScimUser ^. vsuRichInfo -- store new user value to scim_user table (spar). 
(this must happen last, so in case -- of crash the client can repeat the operation and it won't be considered a noop.) @@ -416,10 +441,10 @@ to a single `externalId`. -} assertUserRefUnused :: UserId -> SAML.UserRef -> Scim.ScimHandler Spar () assertUserRefUnused wireUserId userRef = do - mExistingUserId <- lift $ wrapMonadClient (Data.getUser userRef) + mExistingUserId <- lift $ wrapMonadClient (Data.getSAMLUser userRef) case mExistingUserId of -- No existing user for this userRef; it's okay to set it - Nothing -> return () + Nothing -> return () -- A user exists; verify that it's the same user before updating Just existingUserId -> unless (existingUserId == wireUserId) $ diff --git a/services/spar/test-integration/Test/Spar/AppSpec.hs b/services/spar/test-integration/Test/Spar/AppSpec.hs index 2fdf17e3692..5e1d2e70a3d 100644 --- a/services/spar/test-integration/Test/Spar/AppSpec.hs +++ b/services/spar/test-integration/Test/Spar/AppSpec.hs @@ -155,5 +155,5 @@ requestAccessVerdict idp isGranted mkAuthnReq = do $ outcome qry :: [(SBS, SBS)] qry = queryPairs $ uriQuery loc - muid <- runSparCass $ Data.getUser uref + muid <- runSparCass $ Data.getSAMLUser uref pure (muid, outcome, loc, qry) diff --git a/services/spar/test-integration/Test/Spar/DataSpec.hs b/services/spar/test-integration/Test/Spar/DataSpec.hs index 34f32421891..53f734aba07 100644 --- a/services/spar/test-integration/Test/Spar/DataSpec.hs +++ b/services/spar/test-integration/Test/Spar/DataSpec.hs @@ -98,14 +98,14 @@ spec = do context "user is new" $ do it "getUser returns Nothing" $ do uref <- nextUserRef - muid <- runSparCass $ Data.getUser uref + muid <- runSparCass $ Data.getSAMLUser uref liftIO $ muid `shouldBe` Nothing it "inserts new user and responds with 201 / returns new user" $ do uref <- nextUserRef uid <- nextWireId - () <- runSparCass $ insertUser uref uid - muid <- runSparCass $ Data.getUser uref + () <- runSparCass $ Data.insertSAMLUser uref uid + muid <- runSparCass $ Data.getSAMLUser 
uref liftIO $ muid `shouldBe` Just uid context "user already exists (idempotency)" $ do @@ -113,9 +113,9 @@ spec = do uref <- nextUserRef uid <- nextWireId uid' <- nextWireId - () <- runSparCass $ insertUser uref uid - () <- runSparCass $ insertUser uref uid' - muid <- runSparCass $ Data.getUser uref + () <- runSparCass $ Data.insertSAMLUser uref uid + () <- runSparCass $ Data.insertSAMLUser uref uid' + muid <- runSparCass $ Data.getSAMLUser uref liftIO $ muid `shouldBe` Just uid' @@ -196,7 +196,6 @@ spec = do idps <- runSparCass $ Data.getIdPConfigsByTeam teamid liftIO $ idps `shouldBe` [] - testSPStoreID :: forall m (a :: Type). (m ~ ReaderT Data.Env (ExceptT TTLError Client), Typeable a) => (SAML.ID a -> SAML.Time -> m ()) @@ -257,10 +256,10 @@ testDeleteTeam = it "cleans up all the right tables after deletion" $ do liftIO $ tokens `shouldBe` [] -- The users from 'user': do let Right uref1 = fromUserSSOId ssoid1 - mbUser1 <- runSparCass $ Data.getUser uref1 + mbUser1 <- runSparCass $ Data.getSAMLUser uref1 liftIO $ mbUser1 `shouldBe` Nothing do let Right uref2 = fromUserSSOId ssoid2 - mbUser2 <- runSparCass $ Data.getUser uref2 + mbUser2 <- runSparCass $ Data.getSAMLUser uref2 liftIO $ mbUser2 `shouldBe` Nothing -- The config from 'idp': do mbIdp <- runSparCass $ Data.getIdPConfig (idp ^. SAML.idpId) diff --git a/services/spar/test-integration/Test/Spar/Scim/UserSpec.hs b/services/spar/test-integration/Test/Spar/Scim/UserSpec.hs index 39d61da294a..2ce385bfef9 100644 --- a/services/spar/test-integration/Test/Spar/Scim/UserSpec.hs +++ b/services/spar/test-integration/Test/Spar/Scim/UserSpec.hs @@ -488,7 +488,7 @@ testUpdateUserRefIndex = do _ <- updateUser tok userid user' vuser' <- either (error . show) pure $ validateScimUser' idp 999999 user' -- 999999 = some big number - muserid' <- runSparCass $ Data.getUser (vuser' ^. vsuSAMLUserRef) + muserid' <- runSparCass $ Data.getSAMLUser (vuser' ^. 
vsuSAMLUserRef) liftIO $ do muserid' `shouldBe` Just userid @@ -519,6 +519,52 @@ specDeleteUser = do !!! const 405 === statusCode describe "DELETE /Users/:id" $ do + it "when called twice, should first delete then 404 you" $ do + (tok, _) <- registerIdPAndScimToken + user <- randomScimUser + storedUser <- createUser tok user + let uid = scimUserId storedUser + + spar <- view teSpar + deleteUser_ (Just tok) (Just uid) spar + !!! const 204 === statusCode + deleteUser_ (Just tok) (Just uid) spar + !!! const 404 === statusCode -- https://tools.ietf.org/html/rfc7644#section-3.6 + + -- FUTUREWORK: hscim has the the following test. we should probably go through all + -- `delete` tests and see if they can move to hscim or are already included there. + + it "should return 401 if we don't provide a token" $ do + user <- randomScimUser + (tok, _) <- registerIdPAndScimToken + storedUser <- createUser tok user + spar <- view teSpar + let uid = scimUserId storedUser + deleteUser_ Nothing (Just uid) spar + !!! const 401 === statusCode + + it "should return 404 if we provide a token for a different team" $ do + (tok, _) <- registerIdPAndScimToken + user <- randomScimUser + storedUser <- createUser tok user + let uid = scimUserId storedUser + + (invalidTok, _) <- registerIdPAndScimToken + spar <- view teSpar + deleteUser_ (Just invalidTok) (Just uid) spar + !!! const 404 === statusCode + + it "getUser should return 404 after deleteUser" $ do + user <- randomScimUser + (tok, _) <- registerIdPAndScimToken + storedUser <- createUser tok user + spar <- view teSpar + let uid = scimUserId storedUser + deleteUser_ (Just tok) (Just uid) spar + !!! const 204 === statusCode + getUser_ (Just tok) uid spar + !!! const 404 === statusCode + it "whether implemented or not, does *NOT EVER* respond with 5xx!" $ do env <- ask user <- randomScimUser @@ -526,6 +572,3 @@ specDeleteUser = do storedUser <- createUser tok user deleteUser_ (Just tok) (Just $ scimUserId storedUser) (env ^. teSpar) !!! 
assertTrue_ (inRange (200, 499) . statusCode) - - it "sets the 'deleted' flag in brig, and does nothing otherwise." $ - pendingWith "really? how do we destroy the data then, and when?" diff --git a/services/spar/test-integration/Util/Core.hs b/services/spar/test-integration/Util/Core.hs index db352adb406..c425b7f0a94 100644 --- a/services/spar/test-integration/Util/Core.hs +++ b/services/spar/test-integration/Util/Core.hs @@ -708,7 +708,7 @@ callIdpDelete' sparreq_ muid idpid = do ssoToUidSpar :: (HasCallStack, MonadIO m, MonadReader TestEnv m) => Brig.UserSSOId -> m (Maybe UserId) ssoToUidSpar ssoid = do ssoref <- either (error . ("could not parse UserRef: " <>)) pure $ Intra.fromUserSSOId ssoid - runSparCass @Client $ Data.getUser ssoref + runSparCass @Client $ Data.getSAMLUser ssoref runSparCass :: (HasCallStack, m ~ Client, MonadIO m', MonadReader TestEnv m') @@ -790,4 +790,4 @@ getUserIdViaRef' uref = do liftIO $ retrying (exponentialBackoff 50 <> limitRetries 5) (\_ -> pure . isNothing) - (\_ -> runClient (env ^. teCql) $ Data.getUser uref) + (\_ -> runClient (env ^. 
teCql) $ Data.getSAMLUser uref) diff --git a/stack.yaml b/stack.yaml index de1a29e1a6a..ed9d6ba339b 100644 --- a/stack.yaml +++ b/stack.yaml @@ -40,7 +40,7 @@ extra-deps: - git: https://github.com/wireapp/saml2-web-sso commit: c03d17d656ac467350c983d5f844c199e5daceea # master (Feb 21, 2019) - git: https://github.com/wireapp/hscim - commit: 42f6018812bf0f04741231b67b1f5e790ce0d489 # master (Feb 25, 2019) + commit: b2ddde040426d332a2eddcddb00e81ffb1144a90 # master (Mar 13, 2019) - git: https://gitlab.com/fisx/tinylog commit: fd7155aaf6f090f48004a8f7857ce9d3cb4f9417 # https://gitlab.com/twittner/tinylog/merge_requests/6 From ddd3e61645877765977d01816153cd9b28e96e7f Mon Sep 17 00:00:00 2001 From: Chris Penner Date: Tue, 19 Mar 2019 14:37:40 +0100 Subject: [PATCH 14/23] Refactor Galley Tests to use Reader Pattern (#666) Introduce TestM Monad for simplification and cleanup --- libs/bilge/src/Bilge/IO.hs | 5 +- services/galley/test/integration/API.hs | 951 +++++++++--------- .../test/integration/API/MessageTimer.hs | 97 +- services/galley/test/integration/API/SQS.hs | 33 +- services/galley/test/integration/API/Teams.hs | 710 ++++++------- services/galley/test/integration/API/Util.hs | 597 ++++++----- services/galley/test/integration/Main.hs | 6 +- services/galley/test/integration/TestSetup.hs | 62 ++ 8 files changed, 1313 insertions(+), 1148 deletions(-) create mode 100644 services/galley/test/integration/TestSetup.hs diff --git a/libs/bilge/src/Bilge/IO.hs b/libs/bilge/src/Bilge/IO.hs index b2b7922dcb3..10ca0d4c010 100644 --- a/libs/bilge/src/Bilge/IO.hs +++ b/libs/bilge/src/Bilge/IO.hs @@ -82,9 +82,12 @@ newtype HttpT m a = HttpT class MonadHttp m where getManager :: m Manager -instance Monad m => MonadHttp (HttpT m) where +instance {-# OVERLAPPING #-} Monad m => MonadHttp (HttpT m) where getManager = HttpT ask +instance {-# OVERLAPPABLE #-} (MonadTrans t, MonadHttp m, Monad m) => MonadHttp (t m) where + getManager = lift getManager + instance MonadBase IO (HttpT IO) 
 where liftBase = liftIO

diff --git a/services/galley/test/integration/API.hs b/services/galley/test/integration/API.hs
index 8f24b232b19..b5fb64b12e7 100644
--- a/services/galley/test/integration/API.hs
+++ b/services/galley/test/integration/API.hs
@@ -5,7 +5,7 @@ import API.Util
 import Bilge hiding (timeout)
 import Bilge.Assert
 import Brig.Types
-import Control.Lens ((^.))
+import Control.Lens ((^.), view)
 import Data.Aeson hiding (json)
 import Data.ByteString.Conversion
 import Data.Id
@@ -16,8 +16,9 @@ import Galley.Types
 import Gundeck.Types.Notification
 import Network.Wai.Utilities.Error
 import Test.Tasty
-import Test.Tasty.Cannon (Cannon, TimeoutUnit (..), (#))
+import Test.Tasty.Cannon (TimeoutUnit (..), (#))
 import Test.Tasty.HUnit
+import TestSetup

 import API.SQS

 import qualified Data.Text.Ascii as Ascii
@@ -32,15 +33,6 @@ import qualified Data.Text as T
 import qualified Test.Tasty.Cannon as WS
 import qualified Data.Code as Code

-type TestSignature a = Galley -> Brig -> Cannon -> TestSetup -> Http a
-
-test :: IO TestSetup -> TestName -> (TestSignature a) -> TestTree
-test s n t = testCase n runTest
-  where
-    runTest = do
-        setup <- s
-        (void $ runHttpT (manager setup) (t (galley setup) (brig setup) (cannon setup) setup))
-
 tests :: IO TestSetup -> TestTree
 tests s = testGroup "Galley integration tests"
     [ mainTests, Teams.tests s, MessageTimer.tests s ]
@@ -103,34 +95,38 @@ tests s = testGroup "Galley integration tests"

 -------------------------------------------------------------------------------
 -- API Tests

-status :: Galley -> Brig -> Cannon -> TestSetup -> Http ()
-status g _ _ _ = get (g . path "/i/status") !!!
-    const 200 === statusCode
+status :: TestM ()
+status = do
+    g <- view tsGalley
+    get (g . path "/i/status") !!!
+        const 200 === statusCode

-monitor :: Galley -> Brig -> Cannon -> TestSetup -> Http ()
-monitor g _ _ _ =
+monitor :: TestM ()
+monitor = do
+    g <- view tsGalley
     get (g . path "/i/monitoring") !!!
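[Editor's note: the `Bilge.IO` hunk above pairs an `{-# OVERLAPPING #-}` base instance with an `{-# OVERLAPPABLE #-}` instance that lifts `MonadHttp` through any monad transformer, which is what lets the new `TestM` stack call HTTP helpers without re-threading a `Manager`. A standalone sketch of that trick, outside the real Bilge module — `Manager` is a stand-in `String` here to keep the example self-contained:]

```haskell
{-# LANGUAGE FlexibleInstances          #-}
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
{-# LANGUAGE UndecidableInstances       #-}

import Control.Monad.Trans.Class  (MonadTrans, lift)
import Control.Monad.Trans.Reader (ReaderT, ask, runReaderT)
import Control.Monad.Trans.State  (evalStateT)

-- Stand-in for Network.HTTP.Client.Manager (assumption, not the real type).
type Manager = String

newtype HttpT m a = HttpT { unHttpT :: ReaderT Manager m a }
  deriving (Functor, Applicative, Monad)

class MonadHttp m where
  getManager :: m Manager

-- Base case: HttpT itself serves the manager it carries.
instance {-# OVERLAPPING #-} Monad m => MonadHttp (HttpT m) where
  getManager = HttpT ask

-- Lifted case: any transformer over a MonadHttp is a MonadHttp too,
-- so stacks like StateT s (HttpT IO) need no per-transformer instances.
instance {-# OVERLAPPABLE #-} (MonadTrans t, MonadHttp m, Monad m)
      => MonadHttp (t m) where
  getManager = lift getManager

runHttpT :: Manager -> HttpT m a -> m a
runHttpT mgr m = runReaderT (unHttpT m) mgr

-- getManager here resolves via the OVERLAPPABLE instance (t = StateT Int).
demo :: IO Manager
demo = runHttpT "shared-manager" (evalStateT getManager (0 :: Int))
```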
do const 200 === statusCode const (Just "application/json") =~= getHeader "Content-Type" -postConvOk :: Galley -> Brig -> Cannon -> TestSetup -> Http () -postConvOk g b c _ = do - alice <- randomUser b - bob <- randomUser b - jane <- randomUser b - connectUsers b alice (list1 bob [jane]) +postConvOk :: TestM () +postConvOk = do + c <- view tsCannon + alice <- randomUser + bob <- randomUser + jane <- randomUser + connectUsers alice (list1 bob [jane]) -- Ensure name is within range, max size is 256 - postConv g alice [bob, jane] (Just (T.replicate 257 "a")) [] Nothing Nothing !!! + postConv alice [bob, jane] (Just (T.replicate 257 "a")) [] Nothing Nothing !!! const 400 === statusCode let nameMaxSize = T.replicate 256 "a" WS.bracketR3 c alice bob jane $ \(wsA, wsB, wsJ) -> do - rsp <- postConv g alice [bob, jane] (Just nameMaxSize) [] Nothing Nothing getConv g usr cnv + convView cnv usr = decodeBody' "conversation" <$> getConv usr cnv checkWs alice (cnv, ws) = WS.awaitMatch (5 # Second) ws $ \n -> do ntfTransient n @?= False let e = List1.head (WS.unpackPayload n) @@ -141,27 +137,28 @@ postConvOk g b c _ = do Just (EdConversation c') -> assertConvEquals cnv c' _ -> assertFailure "Unexpected event data" -postCryptoMessage1 :: Galley -> Brig -> Cannon -> TestSetup -> Http () -postCryptoMessage1 g b c _ = do - (alice, ac) <- randomUserWithClient b (someLastPrekeys !! 0) - (bob, bc) <- randomUserWithClient b (someLastPrekeys !! 1) - (eve, ec) <- randomUserWithClient b (someLastPrekeys !! 2) - connectUsers b alice (list1 bob [eve]) - conv <- decodeConvId <$> postConv g alice [bob, eve] (Just "gossip") [] Nothing Nothing +postCryptoMessage1 :: TestM () +postCryptoMessage1 = do + c <- view tsCannon + (alice, ac) <- randomUserWithClient (someLastPrekeys !! 0) + (bob, bc) <- randomUserWithClient (someLastPrekeys !! 1) + (eve, ec) <- randomUserWithClient (someLastPrekeys !! 
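[Editor's note: the refactor above replaces the `Galley -> Brig -> Cannon -> TestSetup -> Http a` argument threading with a `TestM` reader monad, so each test pulls out only the piece of the environment it needs (`view tsGalley`, `view tsCannon`, ...). A reduced, self-contained model of that pattern — plain record accessors stand in for the `Control.Lens` views, and service clients are stand-in `String`s, not the real types from `TestSetup.hs`:]

```haskell
import Control.Monad.Trans.Reader (ReaderT, asks, runReaderT)

-- Simplified environment (assumption: the real TestSetup holds service
-- clients, a Cannon, an AWS env, etc.).
data TestSetup = TestSetup
  { tsGalley      :: String
  , tsBrig        :: String
  , tsMaxConvSize :: Int
  }

type TestM = ReaderT TestSetup IO

-- Before: status :: Galley -> Brig -> Cannon -> TestSetup -> Http ()
-- After: the test asks the shared environment for the galley endpoint.
status :: TestM String
status = do
  g <- asks tsGalley
  pure (g ++ "/i/status")

runTest :: TestSetup -> TestM a -> IO a
runTest = flip runReaderT

demo :: IO String
demo = runTest (TestSetup "http://galley" "http://brig" 128) status
```

One `runTest` at the top level replaces the deleted `test`/`TestSignature` plumbing, which is why the diffstat shows hundreds of deleted lines across the test modules.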
2) + connectUsers alice (list1 bob [eve]) + conv <- decodeConvId <$> postConv alice [bob, eve] (Just "gossip") [] Nothing Nothing -- WS receive timeout let t = 5 # Second -- Missing eve let m1 = [(bob, bc, "ciphertext1")] - postOtrMessage id g alice ac conv m1 !!! do + postOtrMessage id alice ac conv m1 !!! do const 412 === statusCode assertTrue_ (eqMismatch [(eve, Set.singleton ec)] [] [] . decodeBody) -- Complete WS.bracketR2 c bob eve $ \(wsB, wsE) -> do let m2 = [(bob, bc, "ciphertext2"), (eve, ec, "ciphertext2")] - postOtrMessage id g alice ac conv m2 !!! do + postOtrMessage id alice ac conv m2 !!! do const 201 === statusCode assertTrue_ (eqMismatch [] [] [] . decodeBody) void . liftIO $ WS.assertMatch t wsB (wsAssertOtr conv alice ac bc "ciphertext2") @@ -170,7 +167,7 @@ postCryptoMessage1 g b c _ = do -- Redundant self WS.bracketR3 c alice bob eve $ \(wsA, wsB, wsE) -> do let m3 = [(alice, ac, "ciphertext3"), (bob, bc, "ciphertext3"), (eve, ec, "ciphertext3")] - postOtrMessage id g alice ac conv m3 !!! do + postOtrMessage id alice ac conv m3 !!! do const 201 === statusCode assertTrue_ (eqMismatch [] [(alice, Set.singleton ac)] [] . decodeBody) void . liftIO $ WS.assertMatch t wsB (wsAssertOtr conv alice ac bc "ciphertext3") @@ -180,9 +177,9 @@ postCryptoMessage1 g b c _ = do -- Deleted eve WS.bracketR2 c bob eve $ \(wsB, wsE) -> do - deleteClient b eve ec (Just $ PlainTextPassword defPassword) !!! const 200 === statusCode + deleteClient eve ec (Just $ PlainTextPassword defPassword) !!! const 200 === statusCode let m4 = [(bob, bc, "ciphertext4"), (eve, ec, "ciphertext4")] - postOtrMessage id g alice ac conv m4 !!! do + postOtrMessage id alice ac conv m4 !!! do const 201 === statusCode assertTrue_ (eqMismatch [] [] [(eve, Set.singleton ec)] . decodeBody) void . 
liftIO $ WS.assertMatch t wsB (wsAssertOtr conv alice ac bc "ciphertext4") @@ -192,7 +189,7 @@ postCryptoMessage1 g b c _ = do -- Deleted eve & redundant self WS.bracketR3 c alice bob eve $ \(wsA, wsB, wsE) -> do let m5 = [(bob, bc, "ciphertext5"), (eve, ec, "ciphertext5"), (alice, ac, "ciphertext5")] - postOtrMessage id g alice ac conv m5 !!! do + postOtrMessage id alice ac conv m5 !!! do const 201 === statusCode assertTrue_ (eqMismatch [] [(alice, Set.singleton ac)] [(eve, Set.singleton ec)] . decodeBody) void . liftIO $ WS.assertMatch t wsB (wsAssertOtr conv alice ac bc "ciphertext5") @@ -202,21 +199,21 @@ postCryptoMessage1 g b c _ = do -- Missing Bob, deleted eve & redundant self let m6 = [(eve, ec, "ciphertext6"), (alice, ac, "ciphertext6")] - postOtrMessage id g alice ac conv m6 !!! do + postOtrMessage id alice ac conv m6 !!! do const 412 === statusCode assertTrue_ (eqMismatch [(bob, Set.singleton bc)] [(alice, Set.singleton ac)] [(eve, Set.singleton ec)] . decodeBody) -- A second client for Bob - bc2 <- randomClient b bob (someLastPrekeys !! 3) + bc2 <- randomClient bob (someLastPrekeys !! 3) -- The first client listens for all messages of Bob WS.bracketR c bob $ \wsB -> do let cipher = "ciphertext7" -- The second client listens only for his own messages WS.bracketR (c . queryItem "client" (toByteString' bc2)) bob $ \wsB2 -> do let m7 = [(bob, bc, cipher), (bob, bc2, cipher)] - postOtrMessage id g alice ac conv m7 !!! do + postOtrMessage id alice ac conv m7 !!! do const 201 === statusCode assertTrue_ (eqMismatch [] [] [] . decodeBody) -- Bob's first client gets both messages @@ -227,16 +224,17 @@ postCryptoMessage1 g b c _ = do liftIO $ assertBool "unexpected equal clients" (bc /= bc2) assertNoMsg wsB2 (wsAssertOtr conv alice ac bc cipher) -postCryptoMessage2 :: Galley -> Brig -> Cannon -> TestSetup -> Http () -postCryptoMessage2 g b _ _ = do - (alice, ac) <- randomUserWithClient b (someLastPrekeys !! 
0) - (bob, bc) <- randomUserWithClient b (someLastPrekeys !! 1) - (eve, ec) <- randomUserWithClient b (someLastPrekeys !! 2) - connectUsers b alice (list1 bob [eve]) - conv <- decodeConvId <$> postConv g alice [bob, eve] (Just "gossip") [] Nothing Nothing +postCryptoMessage2 :: TestM () +postCryptoMessage2 = do + b <- view tsBrig + (alice, ac) <- randomUserWithClient (someLastPrekeys !! 0) + (bob, bc) <- randomUserWithClient (someLastPrekeys !! 1) + (eve, ec) <- randomUserWithClient (someLastPrekeys !! 2) + connectUsers alice (list1 bob [eve]) + conv <- decodeConvId <$> postConv alice [bob, eve] (Just "gossip") [] Nothing Nothing -- Missing eve let m = [(bob, bc, "hello bob")] - r1 <- postOtrMessage id g alice ac conv m Map.lookup eve (userClientMap p) @=? Just [ec] -postCryptoMessage3 :: Galley -> Brig -> Cannon -> TestSetup -> Http () -postCryptoMessage3 g b _ _ = do - (alice, ac) <- randomUserWithClient b (someLastPrekeys !! 0) - (bob, bc) <- randomUserWithClient b (someLastPrekeys !! 1) - (eve, ec) <- randomUserWithClient b (someLastPrekeys !! 2) - connectUsers b alice (list1 bob [eve]) - conv <- decodeConvId <$> postConv g alice [bob, eve] (Just "gossip") [] Nothing Nothing +postCryptoMessage3 :: TestM () +postCryptoMessage3 = do + b <- view tsBrig + (alice, ac) <- randomUserWithClient (someLastPrekeys !! 0) + (bob, bc) <- randomUserWithClient (someLastPrekeys !! 1) + (eve, ec) <- randomUserWithClient (someLastPrekeys !! 2) + connectUsers alice (list1 bob [eve]) + conv <- decodeConvId <$> postConv alice [bob, eve] (Just "gossip") [] Nothing Nothing -- Missing eve let ciphertext = encodeCiphertext "hello bob" let m = otrRecipients [(bob, [(bc, ciphertext)])] - r1 <- postProtoOtrMessage g alice ac conv m Map.lookup eve (userClientMap p) @=? Just [ec] -postCryptoMessage4 :: Galley -> Brig -> Cannon -> TestSetup -> Http () -postCryptoMessage4 g b _ _ = do - alice <- randomUser b - bob <- randomUser b - bc <- randomClient b bob (someLastPrekeys !! 
0) - connectUsers b alice (list1 bob []) - conv <- decodeConvId <$> postConv g alice [bob] (Just "gossip") [] Nothing Nothing +postCryptoMessage4 :: TestM () +postCryptoMessage4 = do + alice <- randomUser + bob <- randomUser + bc <- randomClient bob (someLastPrekeys !! 0) + connectUsers alice (list1 bob []) + conv <- decodeConvId <$> postConv alice [bob] (Just "gossip") [] Nothing Nothing -- Unknown client ID => 403 let ciphertext = encodeCiphertext "hello bob" let m = otrRecipients [(bob, [(bc, ciphertext)])] - postProtoOtrMessage g alice (ClientId "172618352518396") conv m !!! + postProtoOtrMessage alice (ClientId "172618352518396") conv m !!! const 403 === statusCode -postCryptoMessage5 :: Galley -> Brig -> Cannon -> TestSetup -> Http () -postCryptoMessage5 g b _ _ = do - (alice, ac) <- randomUserWithClient b (someLastPrekeys !! 0) - (bob, bc) <- randomUserWithClient b (someLastPrekeys !! 1) - (eve, ec) <- randomUserWithClient b (someLastPrekeys !! 2) - connectUsers b alice (list1 bob [eve]) - conv <- decodeConvId <$> postConv g alice [bob, eve] (Just "gossip") [] Nothing Nothing +postCryptoMessage5 :: TestM () +postCryptoMessage5 = do + (alice, ac) <- randomUserWithClient (someLastPrekeys !! 0) + (bob, bc) <- randomUserWithClient (someLastPrekeys !! 1) + (eve, ec) <- randomUserWithClient (someLastPrekeys !! 2) + connectUsers alice (list1 bob [eve]) + conv <- decodeConvId <$> postConv alice [bob, eve] (Just "gossip") [] Nothing Nothing -- Missing eve let m = [(bob, bc, "hello bob")] -- These three are equivalent (i.e. report all missing clients) - postOtrMessage id g alice ac conv m !!! + postOtrMessage id alice ac conv m !!! const 412 === statusCode - postOtrMessage (queryItem "ignore_missing" "false") g alice ac conv m !!! + postOtrMessage (queryItem "ignore_missing" "false") alice ac conv m !!! const 412 === statusCode - postOtrMessage (queryItem "report_missing" "true") g alice ac conv m !!! 
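[Editor's note: the `postCryptoMessage` tests above repeatedly assert Galley's client-mismatch report via `eqMismatch`: recipients the sender omitted are "missing" (412), the sender's own clients are "redundant", and clients that were deleted are "deleted". A pure sketch of that classification — a simplified model with `String` client ids, not Galley's real types or exact rules:]

```haskell
import           Data.Set (Set)
import qualified Data.Set as Set

data Mismatch = Mismatch
  { missing   :: Set String  -- expected recipients not addressed
  , redundant :: Set String  -- addressed but not expected (e.g. sender's own client)
  , deleted   :: Set String  -- addressed but no longer existing
  } deriving (Eq, Show)

checkRecipients
  :: Set String  -- clients currently in the conversation (minus sender)
  -> Set String  -- clients that have been deleted
  -> Set String  -- clients actually addressed by the message
  -> Mismatch
checkRecipients expected gone addressed = Mismatch
  { missing   = expected `Set.difference` addressed
  , redundant = addressed `Set.difference` (expected `Set.union` gone)
  , deleted   = addressed `Set.intersection` gone
  }
```

Under this model, a message addressing only a deleted client and the sender's own client (the "Missing Bob, deleted eve & redundant self" case) reports all three sets as non-empty, matching the 412 assertion in the test.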
+ postOtrMessage (queryItem "report_missing" "true") alice ac conv m !!! const 412 === statusCode -- These two are equivalent (i.e. ignore all missing clients) - postOtrMessage (queryItem "ignore_missing" "true") g alice ac conv m !!! + postOtrMessage (queryItem "ignore_missing" "true") alice ac conv m !!! const 201 === statusCode - postOtrMessage (queryItem "report_missing" "false") g alice ac conv m !!! + postOtrMessage (queryItem "report_missing" "false") alice ac conv m !!! const 201 === statusCode -- Report missing clients of a specific user only - postOtrMessage (queryItem "report_missing" (toByteString' bob)) g alice ac conv m !!! + postOtrMessage (queryItem "report_missing" (toByteString' bob)) alice ac conv m !!! const 201 === statusCode - _rs <- postOtrMessage (queryItem "report_missing" (toByteString' eve)) g alice ac conv [] Brig -> Cannon -> TestSetup -> Http () -postJoinConvOk g b c _ = do - alice <- randomUser b - bob <- randomUser b - conv <- decodeConvId <$> postConv g alice [] (Just "gossip") [InviteAccess, LinkAccess] Nothing Nothing +postJoinConvOk :: TestM () +postJoinConvOk = do + c <- view tsCannon + alice <- randomUser + bob <- randomUser + conv <- decodeConvId <$> postConv alice [] (Just "gossip") [InviteAccess, LinkAccess] Nothing Nothing WS.bracketR2 c alice bob $ \(wsA, wsB) -> do - postJoinConv g bob conv !!! const 200 === statusCode - postJoinConv g bob conv !!! const 204 === statusCode + postJoinConv bob conv !!! const 200 === statusCode + postJoinConv bob conv !!! const 204 === statusCode void . 
liftIO $ WS.assertMatchN (5 # Second) [wsA, wsB] $ wsAssertMemberJoin conv bob [bob] -postJoinCodeConvOk :: Galley -> Brig -> Cannon -> TestSetup -> Http () -postJoinCodeConvOk g b c _ = do - alice <- randomUser b - bob <- randomUser b - eve <- ephemeralUser b - dave <- ephemeralUser b - conv <- decodeConvId <$> postConv g alice [] (Just "gossip") [CodeAccess] (Just ActivatedAccessRole) Nothing - cCode <- decodeConvCodeEvent <$> postConvCode g alice conv +postJoinCodeConvOk :: TestM () +postJoinCodeConvOk = do + c <- view tsCannon + alice <- randomUser + bob <- randomUser + eve <- ephemeralUser + dave <- ephemeralUser + conv <- decodeConvId <$> postConv alice [] (Just "gossip") [CodeAccess] (Just ActivatedAccessRole) Nothing + cCode <- decodeConvCodeEvent <$> postConvCode alice conv -- currently ConversationCode is used both as return type for POST ../code and as body for ../join -- POST /code gives code,key,uri -- POST /join expects code,key @@ -353,139 +354,140 @@ postJoinCodeConvOk g b c _ = do -- with ActivatedAccess, bob can join, but not eve WS.bracketR2 c alice bob $ \(wsA, wsB) -> do -- incorrect code/key does not work - postJoinCodeConv g bob incorrectCode !!! const 404 === statusCode + postJoinCodeConv bob incorrectCode !!! const 404 === statusCode -- correct code works - postJoinCodeConv g bob payload !!! const 200 === statusCode + postJoinCodeConv bob payload !!! const 200 === statusCode -- test no-op - postJoinCodeConv g bob payload !!! const 204 === statusCode + postJoinCodeConv bob payload !!! const 204 === statusCode -- eve cannot join - postJoinCodeConv g eve payload !!! const 403 === statusCode + postJoinCodeConv eve payload !!! const 403 === statusCode void . liftIO $ WS.assertMatchN (5 # Second) [wsA, wsB] $ wsAssertMemberJoin conv bob [bob] -- changing access to non-activated should give eve access let nonActivatedAccess = ConversationAccessUpdate [CodeAccess] NonActivatedAccessRole - putAccessUpdate g alice conv nonActivatedAccess !!! 
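[Editor's note: `postJoinCodeConvOk` above exercises the access-role rules: under `ActivatedAccessRole` an activated user (bob) can redeem the code while an ephemeral guest (eve) gets 403, and relaxing the conversation to `NonActivatedAccessRole` lets the guest in. A minimal decision function capturing just those expectations — a simplified model covering only the three roles these tests touch, not Galley's implementation:]

```haskell
data AccessRole
  = TeamAccessRole          -- only team members may join
  | ActivatedAccessRole     -- any activated (non-ephemeral) user may join
  | NonActivatedAccessRole  -- anyone with the code may join
  deriving (Eq, Show)

data Joiner = Joiner
  { isTeamMember :: Bool
  , isActivated  :: Bool
  } deriving (Show)

mayJoin :: AccessRole -> Joiner -> Bool
mayJoin TeamAccessRole         u = isTeamMember u
mayJoin ActivatedAccessRole    u = isActivated u
mayJoin NonActivatedAccessRole _ = True
```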
const 200 === statusCode - postJoinCodeConv g eve payload !!! const 200 === statusCode + putAccessUpdate alice conv nonActivatedAccess !!! const 200 === statusCode + postJoinCodeConv eve payload !!! const 200 === statusCode -- after removing CodeAccess, no further people can join let noCodeAccess = ConversationAccessUpdate [InviteAccess] NonActivatedAccessRole - putAccessUpdate g alice conv noCodeAccess !!! const 200 === statusCode - postJoinCodeConv g dave payload !!! const 404 === statusCode - -postConvertCodeConv :: Galley -> Brig -> Cannon -> TestSetup -> Http () -postConvertCodeConv g b c _ = do - alice <- randomUser b - conv <- decodeConvId <$> postConv g alice [] (Just "gossip") [InviteAccess] Nothing Nothing + putAccessUpdate alice conv noCodeAccess !!! const 200 === statusCode + postJoinCodeConv dave payload !!! const 404 === statusCode + +postConvertCodeConv :: TestM () +postConvertCodeConv = do + c <- view tsCannon + alice <- randomUser + conv <- decodeConvId <$> postConv alice [] (Just "gossip") [InviteAccess] Nothing Nothing -- Cannot do code operations if conversation not in code access - postConvCode g alice conv !!! const 403 === statusCode - deleteConvCode g alice conv !!! const 403 === statusCode - getConvCode g alice conv !!! const 403 === statusCode + postConvCode alice conv !!! const 403 === statusCode + deleteConvCode alice conv !!! const 403 === statusCode + getConvCode alice conv !!! const 403 === statusCode -- cannot change to TeamAccessRole as not a team conversation let teamAccess = ConversationAccessUpdate [InviteAccess] TeamAccessRole - putAccessUpdate g alice conv teamAccess !!! const 403 === statusCode + putAccessUpdate alice conv teamAccess !!! const 403 === statusCode -- change access WS.bracketR c alice $ \wsA -> do let nonActivatedAccess = ConversationAccessUpdate [InviteAccess, CodeAccess] NonActivatedAccessRole - putAccessUpdate g alice conv nonActivatedAccess !!! 
const 200 === statusCode + putAccessUpdate alice conv nonActivatedAccess !!! const 200 === statusCode -- test no-op - putAccessUpdate g alice conv nonActivatedAccess !!! const 204 === statusCode + putAccessUpdate alice conv nonActivatedAccess !!! const 204 === statusCode void . liftIO $ WS.assertMatchN (5 # Second) [wsA] $ wsAssertConvAccessUpdate conv alice nonActivatedAccess -- Create/get/update/delete codes - getConvCode g alice conv !!! const 404 === statusCode - c1 <- decodeConvCodeEvent <$> (postConvCode g alice conv (getConvCode g alice conv (postConvCode alice conv (getConvCode alice conv (postConvCode g alice conv (postConvCode alice conv Brig -> Cannon -> TestSetup -> Http () -postConvertTeamConv g b c setup = do +postConvertTeamConv :: TestM () +postConvertTeamConv = do + c <- view tsCannon -- Create a team conversation with team-alice, team-bob, activated-eve -- Non-activated mallory can join - let a = awsEnv setup - alice <- randomUser b - tid <- createTeamInternal g "foo" alice - assertQueue "create team" a tActivate + alice <- randomUser + tid <- createTeamInternal "foo" alice + assertQueue "create team" tActivate let p1 = symmPermissions [Teams.AddRemoveConvMember] - bobMem <- (\u -> Teams.newTeamMember u p1 Nothing) <$> randomUser b - addTeamMemberInternal g tid bobMem + bobMem <- (\u -> Teams.newTeamMember u p1 Nothing) <$> randomUser + addTeamMemberInternal tid bobMem let bob = bobMem^.Teams.userId - assertQueue "team member (bob) join" a $ tUpdate 2 [alice] - daveMem <- (\u -> Teams.newTeamMember u p1 Nothing) <$> randomUser b - addTeamMemberInternal g tid daveMem + assertQueue "team member (bob) join" $ tUpdate 2 [alice] + daveMem <- (\u -> Teams.newTeamMember u p1 Nothing) <$> randomUser + addTeamMemberInternal tid daveMem let dave = daveMem^.Teams.userId - assertQueue "team member (dave) join" a $ tUpdate 3 [alice] - eve <- randomUser b - connectUsers b alice (singleton eve) + assertQueue "team member (dave) join" $ tUpdate 3 [alice] + eve <- 
randomUser + connectUsers alice (singleton eve) let acc = Just $ Set.fromList [InviteAccess, CodeAccess] -- creating a team-only conversation containing eve should fail - createTeamConvAccessRaw g alice tid [bob, eve] (Just "blaa") acc (Just TeamAccessRole) Nothing !!! + createTeamConvAccessRaw alice tid [bob, eve] (Just "blaa") acc (Just TeamAccessRole) Nothing !!! const 403 === statusCode -- create conversation allowing any type of guest - conv <- createTeamConvAccess g alice tid [bob, eve] (Just "blaa") acc (Just NonActivatedAccessRole) Nothing + conv <- createTeamConvAccess alice tid [bob, eve] (Just "blaa") acc (Just NonActivatedAccessRole) Nothing -- mallory joins by herself - mallory <- ephemeralUser b - j <- decodeConvCodeEvent <$> postConvCode g alice conv + mallory <- ephemeralUser + j <- decodeConvCodeEvent <$> postConvCode alice conv WS.bracketR3 c alice bob eve $ \(wsA, wsB, wsE) -> do - postJoinCodeConv g mallory j !!! const 200 === statusCode + postJoinCodeConv mallory j !!! const 200 === statusCode void . liftIO $ WS.assertMatchN (5 # Second) [wsA, wsB, wsE] $ wsAssertMemberJoin conv mallory [mallory] WS.bracketRN c [alice, bob, eve, mallory] $ \[wsA, wsB, wsE, wsM] -> do let teamAccess = ConversationAccessUpdate [InviteAccess, CodeAccess] TeamAccessRole - putAccessUpdate g alice conv teamAccess !!! const 200 === statusCode + putAccessUpdate alice conv teamAccess !!! const 200 === statusCode void . liftIO $ WS.assertMatchN (5 # Second) [wsA, wsB, wsE, wsM] $ wsAssertConvAccessUpdate conv alice teamAccess -- non-team members get kicked out void . liftIO $ WS.assertMatchN (5 # Second) [wsA, wsB, wsE, wsM] $ wsAssertMemberLeave conv alice [eve, mallory] -- joining (for mallory) is no longer possible - postJoinCodeConv g mallory j !!! const 403 === statusCode + postJoinCodeConv mallory j !!! const 403 === statusCode -- team members (dave) can still join - postJoinCodeConv g dave j !!! 
const 200 === statusCode - -postJoinConvFail :: Galley -> Brig -> Cannon -> TestSetup -> Http () -postJoinConvFail g b _ _ = do - alice <- randomUser b - bob <- randomUser b - conv <- decodeConvId <$> postConv g alice [] (Just "gossip") [] Nothing Nothing - void $ postJoinConv g bob conv !!! const 403 === statusCode - -getConvsOk :: Galley -> Brig -> Cannon -> TestSetup -> Http () -getConvsOk g b _ _ = do - usr <- randomUser b - getConvs g usr Nothing Nothing !!! do + postJoinCodeConv dave j !!! const 200 === statusCode + +postJoinConvFail :: TestM () +postJoinConvFail = do + alice <- randomUser + bob <- randomUser + conv <- decodeConvId <$> postConv alice [] (Just "gossip") [] Nothing Nothing + void $ postJoinConv bob conv !!! const 403 === statusCode + +getConvsOk :: TestM () +getConvsOk = do + usr <- randomUser + getConvs usr Nothing Nothing !!! do const 200 === statusCode const [toUUID usr] === map (toUUID . cnvId) . decodeConvList -getConvsOk2 :: Galley -> Brig -> Cannon -> TestSetup -> Http () -getConvsOk2 g b _ _ = do - [alice, bob] <- randomUsers b 2 - connectUsers b alice (singleton bob) +getConvsOk2 :: TestM () +getConvsOk2 = do + [alice, bob] <- randomUsers 2 + connectUsers alice (singleton bob) -- create & get one2one conv - cnv1 <- decodeBody' "conversation" <$> postO2OConv g alice bob (Just "gossip1") - getConvs g alice (Just $ Left [cnvId cnv1]) Nothing !!! do + cnv1 <- decodeBody' "conversation" <$> postO2OConv alice bob (Just "gossip1") + getConvs alice (Just $ Left [cnvId cnv1]) Nothing !!! do const 200 === statusCode const (Just [cnvId cnv1]) === fmap (map cnvId . convList) . decodeBody -- create & get group conv - carl <- randomUser b - connectUsers b alice (singleton carl) - cnv2 <- decodeBody' "conversation" <$> postConv g alice [bob, carl] (Just "gossip2") [] Nothing Nothing - getConvs g alice (Just $ Left [cnvId cnv2]) Nothing !!! 
do + carl <- randomUser + connectUsers alice (singleton carl) + cnv2 <- decodeBody' "conversation" <$> postConv alice [bob, carl] (Just "gossip2") [] Nothing Nothing + getConvs alice (Just $ Left [cnvId cnv2]) Nothing !!! do const 200 === statusCode const (Just [cnvId cnv2]) === fmap (map cnvId . convList) . decodeBody -- get both - rs <- getConvs g alice Nothing Nothing decodeBody rs let c1 = cs >>= find ((== cnvId cnv1) . cnvId) let c2 = cs >>= find ((== cnvId cnv2) . cnvId) @@ -499,64 +501,64 @@ getConvsOk2 g b _ _ = do assertEqual "other members mismatch" (Just []) ((\c -> cmOthers (cnvMembers c) \\ cmOthers (cnvMembers expected)) <$> actual) -getConvsFailMaxSize :: Galley -> Brig -> Cannon -> TestSetup -> Http () -getConvsFailMaxSize g b _ _ = do - usr <- randomUser b - getConvs g usr Nothing (Just 501) !!! +getConvsFailMaxSize :: TestM () +getConvsFailMaxSize = do + usr <- randomUser + getConvs usr Nothing (Just 501) !!! const 400 === statusCode -getConvIdsOk :: Galley -> Brig -> Cannon -> TestSetup -> Http () -getConvIdsOk g b _ _ = do - [alice, bob] <- randomUsers b 2 - connectUsers b alice (singleton bob) - void $ postO2OConv g alice bob (Just "gossip") - getConvIds g alice Nothing Nothing !!! do +getConvIdsOk :: TestM () +getConvIdsOk = do + [alice, bob] <- randomUsers 2 + connectUsers alice (singleton bob) + void $ postO2OConv alice bob (Just "gossip") + getConvIds alice Nothing Nothing !!! do const 200 === statusCode const 2 === length . decodeConvIdList - getConvIds g bob Nothing Nothing !!! do + getConvIds bob Nothing Nothing !!! do const 200 === statusCode const 2 === length . 
decodeConvIdList -paginateConvIds :: Galley -> Brig -> Cannon -> TestSetup -> Http () -paginateConvIds g b _ _ = do - [alice, bob, eve] <- randomUsers b 3 - connectUsers b alice (singleton bob) - connectUsers b alice (singleton eve) +paginateConvIds :: TestM () +paginateConvIds = do + [alice, bob, eve] <- randomUsers 3 + connectUsers alice (singleton bob) + connectUsers alice (singleton eve) replicateM_ 256 $ - postConv g alice [bob, eve] (Just "gossip") [] Nothing Nothing !!! + postConv alice [bob, eve] (Just "gossip") [] Nothing Nothing !!! const 201 === statusCode foldM_ (getChunk 16 alice) Nothing [15 .. 0 :: Int] where getChunk size alice start n = do - resp <- getConvIds g alice start (Just size) 0 return (Just (Right (last (convList c)))) -getConvIdsFailMaxSize :: Galley -> Brig -> Cannon -> TestSetup -> Http () -getConvIdsFailMaxSize g b _ _ = do - usr <- randomUser b - getConvIds g usr Nothing (Just 1001) !!! +getConvIdsFailMaxSize :: TestM () +getConvIdsFailMaxSize = do + usr <- randomUser + getConvIds usr Nothing (Just 1001) !!! 
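[Editor's note: `paginateConvIds` above creates 256 conversations and pages through their ids in chunks of 16, using `foldM_` to thread the last id of each page into the next request as the start cursor. A self-contained sketch of that cursor-threading fold over an in-memory list — the store and page logic are simplified stand-ins for `getConvIds`:]

```haskell
import Control.Monad (foldM)

-- Pretend "getConvIds start (Just size)": ids strictly after the cursor.
fetchPage :: [Int] -> Maybe Int -> Int -> [Int]
fetchPage store Nothing      size = take size store
fetchPage store (Just start) size =
  take size (drop 1 (dropWhile (/= start) store))

-- Like the test's getChunk: check the page size, return the new cursor.
getChunk :: [Int] -> Int -> Maybe Int -> Int -> IO (Maybe Int)
getChunk store size cursor expected = do
  let page = fetchPage store cursor size
  if length page /= expected
    then error ("unexpected page length: " ++ show (length page))
    else pure (if null page then cursor else Just (last page))

-- 256 ids walked as 16 full pages of 16; the final cursor is the last id.
demo :: IO (Maybe Int)
demo = foldM (getChunk [1 .. 256] 16) Nothing (replicate 16 16)
```

The real test uses `foldM_` because it only cares that every page has the expected length; `foldM` is used here so the final cursor can be inspected.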
const 400 === statusCode -getConvsPagingOk :: Galley -> Brig -> Cannon -> TestSetup -> Http () -getConvsPagingOk g b _ _ = do - [ally, bill, carl] <- randomUsers b 3 - connectUsers b ally (list1 bill [carl]) - replicateM_ 11 $ postConv g ally [bill, carl] (Just "gossip") [] Nothing Nothing +getConvsPagingOk :: TestM () +getConvsPagingOk = do + [ally, bill, carl] <- randomUsers 3 + connectUsers ally (list1 bill [carl]) + replicateM_ 11 $ postConv ally [bill, carl] (Just "gossip") [] Nothing Nothing walk ally [3,3,3,3,2] -- 11 (group) + 2 (1:1) + 1 (self) walk bill [3,3,3,3,1] -- 11 (group) + 1 (1:1) + 1 (self) walk carl [3,3,3,3,1] -- 11 (group) + 1 (1:1) + 1 (self) where walk u = foldM_ (next u 3) Nothing next u step start n = do - r1 <- getConvIds g u (Right <$> start) (Just step) start) (Just step) decodeBody r1 liftIO $ assertEqual "unexpected length (getConvIds)" (Just n) (length <$> ids1) - r2 <- getConvs g u (Right <$> start) (Just step) start) (Just step) decodeBody r2 liftIO $ assertEqual "unexpected length (getConvs)" (Just n) (length <$> ids3) @@ -564,133 +566,134 @@ getConvsPagingOk g b _ _ = do return $ ids1 >>= listToMaybe . reverse -postConvFailNotConnected :: Galley -> Brig -> Cannon -> TestSetup -> Http () -postConvFailNotConnected g b _ _ = do - alice <- randomUser b - bob <- randomUser b - jane <- randomUser b - postConv g alice [bob, jane] Nothing [] Nothing Nothing !!! do +postConvFailNotConnected :: TestM () +postConvFailNotConnected = do + alice <- randomUser + bob <- randomUser + jane <- randomUser + postConv alice [bob, jane] Nothing [] Nothing Nothing !!! do const 403 === statusCode const (Just "not-connected") === fmap label . 
decodeBody -postConvFailNumMembers :: Galley -> Brig -> Cannon -> TestSetup -> Http () -postConvFailNumMembers g b _ s = do - let n = fromIntegral (maxConvSize s) - alice <- randomUser b - bob:others <- replicateM n (randomUser b) - connectUsers b alice (list1 bob others) - postConv g alice (bob:others) Nothing [] Nothing Nothing !!! do +postConvFailNumMembers :: TestM () +postConvFailNumMembers = do + n <- fromIntegral <$> view tsMaxConvSize + alice <- randomUser + bob:others <- replicateM n (randomUser) + connectUsers alice (list1 bob others) + postConv alice (bob:others) Nothing [] Nothing Nothing !!! do const 400 === statusCode const (Just "client-error") === fmap label . decodeBody -- | If somebody has blocked a user, that user shouldn't be able to create a -- group conversation which includes them. -postConvFailBlocked :: Galley -> Brig -> Cannon -> TestSetup -> Http () -postConvFailBlocked g b _ _ = do - alice <- randomUser b - bob <- randomUser b - jane <- randomUser b - connectUsers b alice (list1 bob [jane]) - putConnection b jane alice Blocked +postConvFailBlocked :: TestM () +postConvFailBlocked = do + alice <- randomUser + bob <- randomUser + jane <- randomUser + connectUsers alice (list1 bob [jane]) + putConnection jane alice Blocked !!! const 200 === statusCode - postConv g alice [bob, jane] Nothing [] Nothing Nothing !!! do + postConv alice [bob, jane] Nothing [] Nothing Nothing !!! do const 403 === statusCode const (Just "not-connected") === fmap label . 
decodeBody

-postSelfConvOk :: Galley -> Brig -> Cannon -> TestSetup -> Http ()
-postSelfConvOk g b _ _ = do
-    alice <- randomUser b
-    m <- postSelfConv g alice
-postO2OConvOk :: Galley -> Brig -> Cannon -> TestSetup -> Http ()
-postO2OConvOk g b _ _ = do
-    alice <- randomUser b
-    bob <- randomUser b
-    connectUsers b alice (singleton bob)
-    a <- postO2OConv g alice bob (Just "chat")
-postConvO2OFailWithSelf :: Galley -> Brig -> Cannon -> TestSetup -> Http ()
-postConvO2OFailWithSelf g b _ _ = do
-    alice <- randomUser b
+postConvO2OFailWithSelf :: TestM ()
+postConvO2OFailWithSelf = do
+    g <- view tsGalley
+    alice <- randomUser
     let inv = NewConvUnmanaged (NewConv [alice] Nothing mempty Nothing Nothing Nothing Nothing)
     post (g . path "/conversations/one2one" . zUser alice . zConn "conn" . zType "access" . json inv) !!! do
         const 403 === statusCode
         const (Just "invalid-op") === fmap label . decodeBody

-postConnectConvOk :: Galley -> Brig -> Cannon -> TestSetup -> Http ()
-postConnectConvOk g b _ _ = do
-    alice <- randomUser b
-    bob <- randomUser b
-    m <- postConnectConv g alice bob "Alice" "connect with me!" Nothing
-postConnectConvOk2 :: Galley -> Brig -> Cannon -> TestSetup -> Http ()
-postConnectConvOk2 g b _ _ = do
-    alice <- randomUser b
-    bob <- randomUser b
+postConnectConvOk2 :: TestM ()
+postConnectConvOk2 = do
+    alice <- randomUser
+    bob <- randomUser
     m <- decodeConvId <$> request alice bob
     n <- decodeConvId <$> request alice bob
     liftIO $ m @=? n
   where
     request alice bob =
-        postConnectConv g alice bob "Alice" "connect with me!" (Just "me@me.com")
-
-putConvAcceptOk :: Galley -> Brig -> Cannon -> TestSetup -> Http ()
-putConvAcceptOk g b _ _ = do
-    alice <- randomUser b
-    bob <- randomUser b
-    cnv <- decodeConvId <$> postConnectConv g alice bob "Alice" "come to zeta!" Nothing
-    putConvAccept g bob cnv !!! const 200 === statusCode
-    getConv g alice cnv !!! do
+        postConnectConv alice bob "Alice" "connect with me!" (Just "me@me.com")
+
+putConvAcceptOk :: TestM ()
+putConvAcceptOk = do
+    alice <- randomUser
+    bob <- randomUser
+    cnv <- decodeConvId <$> postConnectConv alice bob "Alice" "come to zeta!" Nothing
+    putConvAccept bob cnv !!! const 200 === statusCode
+    getConv alice cnv !!! do
         const 200 === statusCode
         const (Just One2OneConv) === fmap cnvType . decodeBody
-    getConv g bob cnv !!! do
+    getConv bob cnv !!! do
         const 200 === statusCode
         const (Just One2OneConv) === fmap cnvType . decodeBody

-putConvAcceptRetry :: Galley -> Brig -> Cannon -> TestSetup -> Http ()
-putConvAcceptRetry g b _ _ = do
-    alice <- randomUser b
-    bob <- randomUser b
-    connectUsers b alice (singleton bob)
-    cnv <- decodeConvId <$> postO2OConv g alice bob (Just "chat")
+putConvAcceptRetry :: TestM ()
+putConvAcceptRetry = do
+    alice <- randomUser
+    bob <- randomUser
+    connectUsers alice (singleton bob)
+    cnv <- decodeConvId <$> postO2OConv alice bob (Just "chat")
     -- If the conversation type is already One2One, everything is 200 OK
-    putConvAccept g bob cnv !!! const 200 === statusCode
+    putConvAccept bob cnv !!! const 200 === statusCode

-postMutualConnectConvOk :: Galley -> Brig -> Cannon -> TestSetup -> Http ()
-postMutualConnectConvOk g b _ _ = do
-    alice <- randomUser b
-    bob <- randomUser b
-    ac <- postConnectConv g alice bob "A" "a" Nothing
-postRepeatConnectConvCancel :: Galley -> Brig -> Cannon -> TestSetup -> Http ()
-postRepeatConnectConvCancel g b _ _ = do
-    alice <- randomUser b
-    bob <- randomUser b
+postRepeatConnectConvCancel :: TestM ()
+postRepeatConnectConvCancel = do
+    alice <- randomUser
+    bob <- randomUser
     -- Alice wants to connect
-    rsp1 <- postConnectConv g alice bob "A" "a" Nothing
 getConv g bob (cnvId cnv)
+    putConvAccept bob (cnvId cnv) !!! const 200 === statusCode
+    cnvX <- decodeBody' "conversation" <$> getConv bob (cnvId cnv)
     liftIO $ do
         ConnectConv @=? cnvType cnvX
         (Just "B") @=? cnvName cnvX
         privateAccess @=? cnvAccess cnvX
     -- Alice accepts, finally turning it into a 1-1
-    putConvAccept g alice (cnvId cnv) !!! const 200 === statusCode
-    cnv4 <- decodeBody' "conversation" <$> getConv g alice (cnvId cnv)
+    putConvAccept alice (cnvId cnv) !!! const 200 === statusCode
+    cnv4 <- decodeBody' "conversation" <$> getConv alice (cnvId cnv)
     liftIO $ do
         One2OneConv @=? cnvType cnv4
         (Just "B") @=? cnvName cnv4
         privateAccess @=? cnvAccess cnv4
   where
     cancel u c = do
+        g <- view tsGalley
         put (g . paths ["/i/conversations", toByteString' (cnvId c), "block"] . zUser u) !!!
             const 200 === statusCode
-        getConv g u (cnvId c) !!! const 404 === statusCode
+        getConv u (cnvId c) !!! const 404 === statusCode

-putBlockConvOk :: Galley -> Brig -> Cannon -> TestSetup -> Http ()
-putBlockConvOk g b _ _ = do
-    alice <- randomUser b
-    bob <- randomUser b
-    conv <- decodeBody' "conversation" <$> postConnectConv g alice bob "Alice" "connect with me!" (Just "me@me.com")
+putBlockConvOk :: TestM ()
+putBlockConvOk = do
+    g <- view tsGalley
+    alice <- randomUser
+    bob <- randomUser
+    conv <- decodeBody' "conversation" <$> postConnectConv alice bob "Alice" "connect with me!" (Just "me@me.com")

-    getConv g alice (cnvId conv) !!! const 200 === statusCode
-    getConv g bob (cnvId conv) !!! const 404 === statusCode
+    getConv alice (cnvId conv) !!! const 200 === statusCode
+    getConv bob (cnvId conv) !!! const 404 === statusCode

     put (g . paths ["/i/conversations", toByteString' (cnvId conv), "block"] . zUser bob) !!!
         const 200 === statusCode
     -- A is still the only member of the 1-1
-    getConv g alice (cnvId conv) !!! do
+    getConv alice (cnvId conv) !!! do
         const 200 === statusCode
         const (cnvMembers conv) === cnvMembers . decodeBody' "conversation"
     -- B accepts the conversation by unblocking
     put (g . paths ["/i/conversations", toByteString' (cnvId conv), "unblock"] . zUser bob) !!!
         const 200 === statusCode
-    getConv g bob (cnvId conv) !!! const 200 === statusCode
+    getConv bob (cnvId conv) !!! const 200 === statusCode
     -- B blocks A in the 1-1
     put (g . paths ["/i/conversations", toByteString' (cnvId conv), "block"] . zUser bob) !!!
         const 200 === statusCode
     -- B no longer sees the 1-1
-    getConv g bob (cnvId conv) !!! const 404 === statusCode
+    getConv bob (cnvId conv) !!! const 404 === statusCode
     -- B unblocks A in the 1-1
     put (g . paths ["/i/conversations", toByteString' (cnvId conv), "unblock"] . zUser bob) !!!
         const 200 === statusCode
     -- B sees the blocked 1-1 again
-    getConv g bob (cnvId conv) !!! do
+    getConv bob (cnvId conv) !!! do
         const 200 === statusCode

-getConvOk :: Galley -> Brig -> Cannon -> TestSetup -> Http ()
-getConvOk g b _ _ = do
-    alice <- randomUser b
-    bob <- randomUser b
-    chuck <- randomUser b
-    connectUsers b alice (list1 bob [chuck])
-    conv <- decodeConvId <$> postConv g alice [bob, chuck] (Just "gossip") [] Nothing Nothing
-    getConv g alice conv !!! const 200 === statusCode
-    getConv g bob conv !!! const 200 === statusCode
-    getConv g chuck conv !!! const 200 === statusCode
-
-accessConvMeta :: Galley -> Brig -> Cannon -> TestSetup -> Http ()
-accessConvMeta g b _ _ = do
-    alice <- randomUser b
-    bob <- randomUser b
-    chuck <- randomUser b
-    connectUsers b alice (list1 bob [chuck])
-    conv <- decodeConvId <$> postConv g alice [bob, chuck] (Just "gossip") [] Nothing Nothing
+getConvOk :: TestM ()
+getConvOk = do
+    alice <- randomUser
+    bob <- randomUser
+    chuck <- randomUser
+    connectUsers alice (list1 bob [chuck])
+    conv <- decodeConvId <$> postConv alice [bob, chuck] (Just "gossip") [] Nothing Nothing
+    getConv alice conv !!! const 200 === statusCode
+    getConv bob conv !!! const 200 === statusCode
+    getConv chuck conv !!! const 200 === statusCode
+
+accessConvMeta :: TestM ()
+accessConvMeta = do
+    g <- view tsGalley
+    alice <- randomUser
+    bob <- randomUser
+    chuck <- randomUser
+    connectUsers alice (list1 bob [chuck])
+    conv <- decodeConvId <$> postConv alice [bob, chuck] (Just "gossip") [] Nothing Nothing
     let meta = ConversationMeta conv RegularConv alice [InviteAccess] ActivatedAccessRole (Just "gossip") Nothing Nothing Nothing
     get (g . paths ["i/conversations", toByteString' conv, "meta"] . zUser alice) !!! do
         const 200 === statusCode
         const (Just meta) === (decode <=< responseBody)

-leaveConnectConversation :: Galley -> Brig -> Cannon -> TestSetup -> Http ()
-leaveConnectConversation g b _ _ = do
-    alice <- randomUser b
-    bob <- randomUser b
-    bdy <- postConnectConv g alice bob "alice" "ni" Nothing
 decodeBody bdy)
-    deleteMember g alice alice c !!! const 403 === statusCode
-
-postMembersOk :: Galley -> Brig -> Cannon -> TestSetup -> Http ()
-postMembersOk g b _ _ = do
-    alice <- randomUser b
-    bob <- randomUser b
-    chuck <- randomUser b
-    eve <- randomUser b
-    connectUsers b alice (list1 bob [chuck, eve])
-    connectUsers b eve (singleton bob)
-    conv <- decodeConvId <$> postConv g alice [bob, chuck] (Just "gossip") [] Nothing Nothing
-    postMembers g alice (singleton eve) conv !!! const 200 === statusCode
+    deleteMember alice alice c !!! const 403 === statusCode
+
+postMembersOk :: TestM ()
+postMembersOk = do
+    alice <- randomUser
+    bob <- randomUser
+    chuck <- randomUser
+    eve <- randomUser
+    connectUsers alice (list1 bob [chuck, eve])
+    connectUsers eve (singleton bob)
+    conv <- decodeConvId <$> postConv alice [bob, chuck] (Just "gossip") [] Nothing Nothing
+    postMembers alice (singleton eve) conv !!! const 200 === statusCode
     -- Check that last_event markers are set for all members
     forM_ [alice, bob, chuck, eve] $ \u -> do
-        _ <- getSelfMember g u conv
-postMembersOk2 :: Galley -> Brig -> Cannon -> TestSetup -> Http ()
-postMembersOk2 g b _ _ = do
-    alice <- randomUser b
-    bob <- randomUser b
-    chuck <- randomUser b
-    connectUsers b alice (list1 bob [chuck])
-    connectUsers b bob (singleton chuck)
-    conv <- decodeConvId <$> postConv g alice [bob, chuck] Nothing [] Nothing Nothing
-    postMembers g bob (singleton chuck) conv !!! const 204 === statusCode
-    chuck' <- decodeBody <$> (getSelfMember g chuck conv
+    conv <- decodeConvId <$> postConv alice [bob, chuck] Nothing [] Nothing Nothing
+    postMembers bob (singleton chuck) conv !!!
const 204 === statusCode
+    chuck' <- decodeBody <$> (getSelfMember chuck conv
 chuck') (Just chuck)

-postMembersOk3 :: Galley -> Brig -> Cannon -> TestSetup -> Http ()
-postMembersOk3 g b _ _ = do
-    alice <- randomUser b
-    bob <- randomUser b
-    eve <- randomUser b
-    connectUsers b alice (list1 bob [eve])
-    conv <- decodeConvId <$> postConv g alice [bob, eve] (Just "gossip") [] Nothing Nothing
+postMembersOk3 :: TestM ()
+postMembersOk3 = do
+    alice <- randomUser
+    bob <- randomUser
+    eve <- randomUser
+    connectUsers alice (list1 bob [eve])
+    conv <- decodeConvId <$> postConv alice [bob, eve] (Just "gossip") [] Nothing Nothing
     -- Bob leaves
-    deleteMember g bob bob conv !!! const 200 === statusCode
+    deleteMember bob bob conv !!! const 200 === statusCode
     -- Fetch bob
-    getSelfMember g bob conv !!! const 200 === statusCode
+    getSelfMember bob conv !!! const 200 === statusCode
     -- Alice re-adds Bob to the conversation
-    postMembers g alice (singleton bob) conv !!! const 200 === statusCode
+    postMembers alice (singleton bob) conv !!! const 200 === statusCode
     -- Fetch bob again
-    getSelfMember g bob conv !!! const 200 === statusCode
-
-postMembersFail :: Galley -> Brig -> Cannon -> TestSetup -> Http ()
-postMembersFail g b _ _ = do
-    alice <- randomUser b
-    bob <- randomUser b
-    chuck <- randomUser b
-    dave <- randomUser b
-    eve <- randomUser b
-    connectUsers b alice (list1 bob [chuck, eve])
-    connectUsers b eve (singleton bob)
-    conv <- decodeConvId <$> postConv g alice [bob, chuck] (Just "gossip") [] Nothing Nothing
-    postMembers g eve (singleton bob) conv !!! const 404 === statusCode
-    postMembers g alice (singleton eve) conv !!! const 200 === statusCode
+    getSelfMember bob conv !!! const 200 === statusCode
+
+postMembersFail :: TestM ()
+postMembersFail = do
+    alice <- randomUser
+    bob <- randomUser
+    chuck <- randomUser
+    dave <- randomUser
+    eve <- randomUser
+    connectUsers alice (list1 bob [chuck, eve])
+    connectUsers eve (singleton bob)
+    conv <- decodeConvId <$> postConv alice [bob, chuck] (Just "gossip") [] Nothing Nothing
+    postMembers eve (singleton bob) conv !!! const 404 === statusCode
+    postMembers alice (singleton eve) conv !!! const 200 === statusCode
     -- Not connected but already there
-    postMembers g chuck (singleton eve) conv !!! const 204 === statusCode
-    postMembers g chuck (singleton dave) conv !!! do
+    postMembers chuck (singleton eve) conv !!! const 204 === statusCode
+    postMembers chuck (singleton dave) conv !!! do
         const 403 === statusCode
         const (Just "not-connected") === fmap label . decodeBody
-    void $ connectUsers b chuck (singleton dave)
-    postMembers g chuck (singleton dave) conv !!! const 200 === statusCode
-    postMembers g chuck (singleton dave) conv !!! const 204 === statusCode
-
-postTooManyMembersFail :: Galley -> Brig -> Cannon -> TestSetup -> Http ()
-postTooManyMembersFail g b _ s = do
-    let n = fromIntegral (maxConvSize s)
-    alice <- randomUser b
-    bob <- randomUser b
-    chuck <- randomUser b
-    connectUsers b alice (list1 bob [chuck])
-    conv <- decodeConvId <$> postConv g alice [bob, chuck] (Just "gossip") [] Nothing Nothing
-    x:xs <- randomUsers b (n - 2)
-    postMembers g chuck (list1 x xs) conv !!! do
+    void $ connectUsers chuck (singleton dave)
+    postMembers chuck (singleton dave) conv !!! const 200 === statusCode
+    postMembers chuck (singleton dave) conv !!! const 204 === statusCode
+
+postTooManyMembersFail :: TestM ()
+postTooManyMembersFail = do
+    n <- fromIntegral <$> view tsMaxConvSize
+    alice <- randomUser
+    bob <- randomUser
+    chuck <- randomUser
+    connectUsers alice (list1 bob [chuck])
+    conv <- decodeConvId <$> postConv alice [bob, chuck] (Just "gossip") [] Nothing Nothing
+    x:xs <- randomUsers (n - 2)
+    postMembers chuck (list1 x xs) conv !!! do
         const 403 === statusCode
         const (Just "too-many-members") === fmap label . decodeBody

-deleteMembersOk :: Galley -> Brig -> Cannon -> TestSetup -> Http ()
-deleteMembersOk g b _ _ = do
-    alice <- randomUser b
-    bob <- randomUser b
-    eve <- randomUser b
-    connectUsers b alice (list1 bob [eve])
-    conv <- decodeConvId <$> postConv g alice [bob, eve] (Just "gossip") [] Nothing Nothing
-    deleteMember g bob bob conv !!! const 200 === statusCode
-    deleteMember g bob bob conv !!! const 404 === statusCode
-    deleteMember g alice eve conv !!! const 200 === statusCode
-    deleteMember g alice eve conv !!! const 204 === statusCode
-    deleteMember g alice alice conv !!! const 200 === statusCode
-    deleteMember g alice alice conv !!! const 404 === statusCode
-
-deleteMembersFailSelf :: Galley -> Brig -> Cannon -> TestSetup -> Http ()
-deleteMembersFailSelf g b _ _ = do
-    alice <- randomUser b
-    self <- decodeConvId <$> postSelfConv g alice
-    deleteMember g alice alice self !!! const 403 === statusCode
-
-deleteMembersFailO2O :: Galley -> Brig -> Cannon -> TestSetup -> Http ()
-deleteMembersFailO2O g b _ _ = do
-    alice <- randomUser b
-    bob <- randomUser b
-    connectUsers b alice (singleton bob)
-    o2o <- decodeConvId <$> postO2OConv g alice bob (Just "foo")
-    deleteMember g alice bob o2o !!! const 403 === statusCode
-
-putConvRenameOk :: Galley -> Brig -> Cannon -> TestSetup -> Http ()
-putConvRenameOk g b c _ = do
-    alice <- randomUser b
-    bob <- randomUser b
-    connectUsers b alice (singleton bob)
-    conv <- decodeConvId <$> postO2OConv g alice bob (Just "gossip")
+deleteMembersOk :: TestM ()
+deleteMembersOk = do
+    alice <- randomUser
+    bob <- randomUser
+    eve <- randomUser
+    connectUsers alice (list1 bob [eve])
+    conv <- decodeConvId <$> postConv alice [bob, eve] (Just "gossip") [] Nothing Nothing
+    deleteMember bob bob conv !!! const 200 === statusCode
+    deleteMember bob bob conv !!! const 404 === statusCode
+    deleteMember alice eve conv !!! const 200 === statusCode
+    deleteMember alice eve conv !!! const 204 === statusCode
+    deleteMember alice alice conv !!! const 200 === statusCode
+    deleteMember alice alice conv !!! const 404 === statusCode
+
+deleteMembersFailSelf :: TestM ()
+deleteMembersFailSelf = do
+    alice <- randomUser
+    self <- decodeConvId <$> postSelfConv alice
+    deleteMember alice alice self !!! const 403 === statusCode
+
+deleteMembersFailO2O :: TestM ()
+deleteMembersFailO2O = do
+    alice <- randomUser
+    bob <- randomUser
+    connectUsers alice (singleton bob)
+    o2o <- decodeConvId <$> postO2OConv alice bob (Just "foo")
+    deleteMember alice bob o2o !!! const 403 === statusCode
+
+putConvRenameOk :: TestM ()
+putConvRenameOk = do
+    g <- view tsGalley
+    c <- view tsCannon
+    alice <- randomUser
+    bob <- randomUser
+    connectUsers alice (singleton bob)
+    conv <- decodeConvId <$> postO2OConv alice bob (Just "gossip")
     WS.bracketR2 c alice bob $ \(wsA, wsB) -> do
         let update = ConversationRename "gossip++"
         put ( g
@@ -943,23 +951,23 @@ putConvRenameOk g b c _ = do
             evtFrom e @?= bob
             evtData e @?= Just (EdConvRename (ConversationRename "gossip++"))

-putMemberOtrMuteOk :: Galley -> Brig -> Cannon -> TestSetup -> Http ()
-putMemberOtrMuteOk g b c _ = do
-    putMemberOk (memberUpdate { mupOtrMute = Just True, mupOtrMuteStatus = Just 0, mupOtrMuteRef = Just "ref" }) g b c
-    putMemberOk (memberUpdate { mupOtrMute = Just False }) g b c
+putMemberOtrMuteOk :: TestM ()
+putMemberOtrMuteOk = do
+    putMemberOk (memberUpdate { mupOtrMute = Just True, mupOtrMuteStatus = Just 0, mupOtrMuteRef = Just "ref" })
+    putMemberOk (memberUpdate { mupOtrMute = Just False })

-putMemberOtrArchiveOk :: Galley -> Brig -> Cannon -> TestSetup -> Http ()
-putMemberOtrArchiveOk g b c _ = do
-    putMemberOk (memberUpdate { mupOtrArchive = Just True, mupOtrArchiveRef = Just "ref" }) g b c
-    putMemberOk (memberUpdate { mupOtrArchive = Just False }) g b c
+putMemberOtrArchiveOk :: TestM ()
+putMemberOtrArchiveOk = do
+    putMemberOk (memberUpdate { mupOtrArchive = Just True, mupOtrArchiveRef = Just "ref" })
+    putMemberOk (memberUpdate { mupOtrArchive = Just False })

-putMemberHiddenOk :: Galley -> Brig -> Cannon -> TestSetup -> Http ()
-putMemberHiddenOk g b c _ = do
-    putMemberOk (memberUpdate { mupHidden = Just True, mupHiddenRef = Just "ref" }) g b c
-    putMemberOk (memberUpdate { mupHidden = Just False }) g b c
+putMemberHiddenOk :: TestM ()
+putMemberHiddenOk = do
+    putMemberOk (memberUpdate { mupHidden = Just True, mupHiddenRef = Just "ref" })
+    putMemberOk (memberUpdate { mupHidden = Just False })

-putMemberAllOk :: Galley -> Brig -> Cannon -> TestSetup ->
 Http ()
-putMemberAllOk g b c _ = putMemberOk
+putMemberAllOk :: TestM ()
+putMemberAllOk = putMemberOk
     (memberUpdate { mupOtrMute = Just True
                   , mupOtrMuteStatus = Just 0
@@ -968,15 +976,16 @@ putMemberAllOk g b c _ = putMemberOk
                   , mupOtrArchiveRef = Just "aref"
                   , mupHidden = Just True
                   , mupHiddenRef = Just "href"
-                  }) g b c
+                  })

-putMemberOk :: MemberUpdate -> Galley -> Brig -> Cannon -> Http ()
-putMemberOk update g b ca = do
-    alice <- randomUser b
-    bob <- randomUser b
-    connectUsers b alice (singleton bob)
-    conv <- decodeConvId <$> postO2OConv g alice bob (Just "gossip")
-    getConv g alice conv !!! const 200 === statusCode
+putMemberOk :: MemberUpdate -> TestM ()
+putMemberOk update = do
+    c <- view tsCannon
+    alice <- randomUser
+    bob <- randomUser
+    connectUsers alice (singleton bob)
+    conv <- decodeConvId <$> postO2OConv alice bob (Just "gossip")
+    getConv alice conv !!! const 200 === statusCode

     -- Expected member state
     let memberBob = Member
@@ -992,8 +1001,8 @@ putMemberOk update g b ca = do
         }

     -- Update member state & verify push notification
-    WS.bracketR ca bob $ \ws -> do
-        putMember g bob update conv !!! const 200 === statusCode
+    WS.bracketR c bob $ \ws -> do
+        putMember bob update conv !!! const 200 === statusCode
         void . liftIO $ WS.assertMatch (5 # Second) ws $ \n -> do
             let e = List1.head (WS.unpackPayload n)
             ntfTransient n @?= False
@@ -1011,7 +1020,7 @@ putMemberOk update g b ca = do
                 x -> assertFailure $ "Unexpected event data: " ++ show x

     -- Verify new member state
-    rs <- getConv g bob conv
 decodeBody rs
     liftIO $ do
         assertBool "user" (isJust bob')
@@ -1024,16 +1033,18 @@ putMemberOk update g b ca = do
         assertEqual "hidden" (memHidden memberBob) (memHidden newBob)
         assertEqual "hidden__ref" (memHiddenRef memberBob) (memHiddenRef newBob)

-putReceiptModeOk :: Galley -> Brig -> Cannon -> TestSetup -> Http ()
-putReceiptModeOk g b c _ = do
-    alice <- randomUser b
-    bob <- randomUser b
-    jane <- randomUser b
-    connectUsers b alice (list1 bob [jane])
-    cnv <- decodeConvId <$> postConv g alice [bob, jane] (Just "gossip") [] Nothing Nothing
+putReceiptModeOk :: TestM ()
+putReceiptModeOk = do
+    g <- view tsGalley
+    c <- view tsCannon
+    alice <- randomUser
+    bob <- randomUser
+    jane <- randomUser
+    connectUsers alice (list1 bob [jane])
+    cnv <- decodeConvId <$> postConv alice [bob, jane] (Just "gossip") [] Nothing Nothing
     WS.bracketR3 c alice bob jane $ \(_wsA, wsB, _wsJ) -> do
         -- By default, nothing is set
-        getConv g alice cnv !!! do
+        getConv alice cnv !!! do
             const 200 === statusCode
             const (Just Nothing) === fmap cnvReceiptMode . decodeBody
@@ -1047,7 +1058,7 @@ putReceiptModeOk g b c _ = do
             ) !!! const 200 === statusCode

         -- Ensure the field is properly set
-        getConv g alice cnv !!! do
+        getConv alice cnv !!! do
             const 200 === statusCode
             const (Just $ Just (ReceiptMode 0)) === fmap cnvReceiptMode . decodeBody
@@ -1065,12 +1076,12 @@ putReceiptModeOk g b c _ = do
         WS.assertNoEvent (1 # Second) [wsB]

         -- Ensure that the new field remains unchanged
-        getConv g alice cnv !!! do
+        getConv alice cnv !!! do
             const 200 === statusCode
             const (Just $ Just (ReceiptMode 0)) === fmap cnvReceiptMode . decodeBody

-    cnv' <- decodeConvId <$> postConvWithReceipt g alice [bob, jane] (Just "gossip") [] Nothing Nothing (ReceiptMode 0)
-    getConv g alice cnv' !!! do
+    cnv' <- decodeConvId <$> postConvWithReceipt alice [bob, jane] (Just "gossip") [] Nothing Nothing (ReceiptMode 0)
+    getConv alice cnv' !!! do
         const 200 === statusCode
         const (Just (Just (ReceiptMode 0))) === fmap cnvReceiptMode . decodeBody
   where
@@ -1085,12 +1096,13 @@ putReceiptModeOk g b c _ = do
             -> assertEqual "modes should match" mode 0
         _ -> assertFailure "Unexpected event data"

-postTypingIndicators :: Galley -> Brig -> Cannon -> TestSetup -> Http ()
-postTypingIndicators g b _ _ = do
-    alice <- randomUser b
-    bob <- randomUser b
-    connectUsers b alice (singleton bob)
-    conv <- decodeConvId <$> postO2OConv g alice bob Nothing
+postTypingIndicators :: TestM ()
+postTypingIndicators = do
+    g <- view tsGalley
+    alice <- randomUser
+    bob <- randomUser
+    connectUsers alice (singleton bob)
+    conv <- decodeConvId <$> postO2OConv alice bob Nothing
     post ( g
          . paths ["conversations", toByteString' conv, "typing"]
          . zUser bob
@@ -1108,25 +1120,26 @@ postTypingIndicators g b _ _ = do
         ) !!! const 400 === statusCode

-removeUser :: Galley -> Brig -> Cannon -> TestSetup -> Http ()
-removeUser g b ca _ = do
-    alice <- randomUser b
-    bob <- randomUser b
-    carl <- randomUser b
-    connectUsers b alice (list1 bob [carl])
-    conv1 <- decodeConvId <$> postConv g alice [bob] (Just "gossip") [] Nothing Nothing
-    conv2 <- decodeConvId <$> postConv g alice [bob, carl] (Just "gossip2") [] Nothing Nothing
-    conv3 <- decodeConvId <$> postConv g alice [carl] (Just "gossip3") [] Nothing Nothing
-    WS.bracketR3 ca alice bob carl $ \(wsA, wsB, wsC) -> do
-        deleteUser g bob
+removeUser :: TestM ()
+removeUser = do
+    c <- view tsCannon
+    alice <- randomUser
+    bob <- randomUser
+    carl <- randomUser
+    connectUsers alice (list1 bob [carl])
+    conv1 <- decodeConvId <$> postConv alice [bob] (Just "gossip") [] Nothing Nothing
+    conv2 <- decodeConvId <$> postConv alice [bob, carl] (Just "gossip2") [] Nothing Nothing
+    conv3 <- decodeConvId <$> postConv alice [carl] (Just "gossip3") [] Nothing Nothing
+    WS.bracketR3 c alice bob carl $ \(wsA, wsB, wsC) -> do
+        deleteUser bob
         void . liftIO $ WS.assertMatchN (5 # Second) [wsA, wsB] $
             matchMemberLeave conv1 bob
         void . liftIO $ WS.assertMatchN (5 # Second) [wsA, wsB, wsC] $
             matchMemberLeave conv2 bob
     -- Check memberships
-    mems1 <- fmap cnvMembers . decodeBody <$> getConv g alice conv1
-    mems2 <- fmap cnvMembers . decodeBody <$> getConv g alice conv2
-    mems3 <- fmap cnvMembers . decodeBody <$> getConv g alice conv3
+    mems1 <- fmap cnvMembers . decodeBody <$> getConv alice conv1
+    mems2 <- fmap cnvMembers . decodeBody <$> getConv alice conv2
+    mems3 <- fmap cnvMembers . decodeBody <$> getConv alice conv3
     let other u = find ((== u) . omId) . cmOthers
     liftIO $ do
         (mems1 >>= other bob) @?= Nothing
diff --git a/services/galley/test/integration/API/MessageTimer.hs b/services/galley/test/integration/API/MessageTimer.hs
index 5f05907e6a0..d70d73d532a 100644
--- a/services/galley/test/integration/API/MessageTimer.hs
+++ b/services/galley/test/integration/API/MessageTimer.hs
@@ -9,21 +9,13 @@ import Data.Misc
 import Galley.Types
 import Network.Wai.Utilities.Error
 import Test.Tasty
-import Test.Tasty.Cannon (Cannon, TimeoutUnit (..), (#))
-import Test.Tasty.HUnit
+import Test.Tasty.Cannon (TimeoutUnit (..), (#))
+import TestSetup
+import Control.Lens (view)
 import qualified Galley.Types.Teams as Teams
 import qualified Test.Tasty.Cannon as WS

-type TestSignature a = Galley -> Brig -> Cannon -> TestSetup -> Http a
-
-test :: IO TestSetup -> TestName -> TestSignature a -> TestTree
-test s n t = testCase n runTest
-  where
-    runTest = do
-        setup <- s
-        (void $ runHttpT (manager setup) (t (galley setup) (brig setup) (cannon setup) setup))
-
 tests :: IO TestSetup -> TestTree
 tests s = testGroup "Per-conversation message timer"
     [ testGroup "timer can be set at creation time"
@@ -37,92 +29,93 @@ tests s = testGroup "Per-conversation message timer"
 messageTimerInit :: Maybe Milliseconds -- ^ Timer value
-    -> Galley -> Brig -> Cannon -> TestSetup -> Http ()
-messageTimerInit mtimer g b _ca _ = do
+    -> TestM ()
+messageTimerInit mtimer = do
     -- Create a conversation with a timer
-    [alice, bob, jane] <- randomUsers b 3
-    connectUsers b alice (list1 bob [jane])
-    rsp <- postConv g alice [bob, jane] Nothing [] Nothing mtimer
-messageTimerChange :: Galley -> Brig -> Cannon -> TestSetup -> Http ()
-messageTimerChange g b _ca _ = do
+messageTimerChange :: TestM ()
+messageTimerChange = do
     -- Create a conversation without a timer
-    [alice, bob, jane] <- randomUsers b 3
-    connectUsers b alice (list1 bob [jane])
-    rsp <- postConv g alice [bob, jane] Nothing [] Nothing Nothing
-messageTimerChangeGuest :: Galley -> Brig -> Cannon -> TestSetup -> Http ()
-messageTimerChangeGuest g b _ca _ = do
+messageTimerChangeGuest :: TestM ()
+messageTimerChangeGuest = do
     -- Create a team and a guest user
-    [owner, member, guest] <- randomUsers b 3
-    connectUsers b owner (list1 member [guest])
-    tid <- createTeam g "team" owner [Teams.newTeamMember member Teams.fullPermissions Nothing]
+    [owner, member, guest] <- randomUsers 3
+    connectUsers owner (list1 member [guest])
+    tid <- createTeam "team" owner [Teams.newTeamMember member Teams.fullPermissions Nothing]
     -- Create a conversation
-    cid <- createTeamConv g owner tid [member, guest] Nothing Nothing Nothing
+    cid <- createTeamConv owner tid [member, guest] Nothing Nothing Nothing
     -- Try to change the timer (as the guest user) and observe failure
-    putMessageTimerUpdate g guest cid (ConversationMessageTimerUpdate timer1sec) !!! do
+    putMessageTimerUpdate guest cid (ConversationMessageTimerUpdate timer1sec) !!! do
        const 403 === statusCode
        const "access-denied" === (label . decodeBody' "error label")
-    getConv g guest cid !!!
+    getConv guest cid !!!
        const Nothing === (cnvMessageTimer <=< decodeBody)
     -- Try to change the timer (as a team member) and observe success
-    putMessageTimerUpdate g member cid (ConversationMessageTimerUpdate timer1sec) !!!
+    putMessageTimerUpdate member cid (ConversationMessageTimerUpdate timer1sec) !!!
        const 200 === statusCode
-    getConv g guest cid !!!
+    getConv guest cid !!!
        const timer1sec === (cnvMessageTimer <=< decodeBody)

-messageTimerChangeO2O :: Galley -> Brig -> Cannon -> TestSetup -> Http ()
-messageTimerChangeO2O g b _ca _ = do
+messageTimerChangeO2O :: TestM ()
+messageTimerChangeO2O = do
     -- Create a 1:1 conversation
-    [alice, bob] <- randomUsers b 2
-    connectUsers b alice (singleton bob)
-    rsp <- postO2OConv g alice bob Nothing
-messageTimerEvent :: Galley -> Brig -> Cannon -> TestSetup -> Http ()
-messageTimerEvent g b ca _ = do
+messageTimerEvent :: TestM ()
+messageTimerEvent = do
+    ca <- view tsCannon
     -- Create a conversation
-    [alice, bob] <- randomUsers b 2
-    connectUsers b alice (singleton bob)
-    rsp <- postConv g alice [bob] Nothing [] Nothing Nothing
 do
         let update = ConversationMessageTimerUpdate timer1sec
-        putMessageTimerUpdate g alice cid update !!!
+        putMessageTimerUpdate alice cid update !!!
            const 200 === statusCode
         void . liftIO $ WS.assertMatchN (5 # Second) [wsA, wsB] $
            wsAssertConvMessageTimerUpdate cid alice update
diff --git a/services/galley/test/integration/API/SQS.hs b/services/galley/test/integration/API/SQS.hs
index 029bafff47c..6ab9af16312 100644
--- a/services/galley/test/integration/API/SQS.hs
+++ b/services/galley/test/integration/API/SQS.hs
@@ -22,6 +22,7 @@ import Proto.TeamEvents as E
 import Proto.TeamEvents_Fields as E
 import System.Logger.Class
 import Test.Tasty.HUnit
+import TestSetup

 import qualified Data.ByteString.Base64 as B64
 import qualified Data.Currency as Currency
@@ -33,22 +34,28 @@ import qualified Network.AWS.SQS as SQS
 import qualified OpenSSL.X509.SystemStore as Ssl
 import qualified System.Logger as L

-ensureQueueEmpty :: MonadIO m => Maybe Aws.Env -> m ()
-ensureQueueEmpty (Just env) = liftIO $ Aws.execute env purgeQueue
-ensureQueueEmpty Nothing = return ()
+ensureQueueEmpty :: TestM ()
+ensureQueueEmpty = view tsAwsEnv >>= ensureQueueEmptyIO

-assertQueue :: MonadIO m => String -> Maybe Aws.Env -> (String -> Maybe E.TeamEvent -> IO ()) -> m ()
-assertQueue label (Just env) check = liftIO $ Aws.execute env $ fetchMessage label check
-assertQueue _ Nothing _ = return ()
+ensureQueueEmptyIO :: MonadIO m => Maybe Aws.Env -> m ()
+ensureQueueEmptyIO (Just env) = liftIO $ Aws.execute env purgeQueue
+ensureQueueEmptyIO Nothing = return ()

--- Try to assert an event in the queue for a `timeout` amount of seconds
-tryAssertQueue :: MonadIO m => Int -> String -> Maybe Aws.Env -> (String -> Maybe E.TeamEvent -> IO ()) -> m ()
-tryAssertQueue timeout label (Just env) check = liftIO $ Aws.execute env $ awaitMessage label timeout check
-tryAssertQueue _ _ Nothing _ = return ()
+assertQueue :: String -> (String -> Maybe E.TeamEvent -> IO ()) -> TestM ()
+assertQueue label check = view tsAwsEnv >>= \case
+    Just env -> liftIO $ Aws.execute env $ fetchMessage label check
+    Nothing  -> return ()

-assertQueueEmpty :: (HasCallStack, MonadIO m) => Maybe Aws.Env -> m ()
-assertQueueEmpty (Just env) = liftIO $ Aws.execute env ensureNoMessages
-assertQueueEmpty Nothing = return ()
+-- Try to assert an event in the queue for a `timeout` amount of seconds
+tryAssertQueue :: Int -> String -> (String -> Maybe E.TeamEvent -> IO ()) -> TestM ()
+tryAssertQueue timeout label check = view tsAwsEnv >>= \case
+    Just env -> liftIO $ Aws.execute env $ awaitMessage label timeout check
+    Nothing  -> return ()
+
+assertQueueEmpty :: (HasCallStack) => TestM ()
+assertQueueEmpty = view tsAwsEnv >>= \case
+    Just env -> liftIO $ Aws.execute env ensureNoMessages
+    Nothing  -> return ()

 tActivateWithCurrency :: HasCallStack => Maybe Currency.Alpha -> String -> Maybe E.TeamEvent -> IO ()
 tActivateWithCurrency c l (Just e) = do
diff --git a/services/galley/test/integration/API/Teams.hs b/services/galley/test/integration/API/Teams.hs
index afb9c37c479..b03d08e2e35 100644
--- a/services/galley/test/integration/API/Teams.hs
+++ b/services/galley/test/integration/API/Teams.hs
@@ -17,8 +17,9 @@ import Galley.Types.Teams
 import Galley.Types.Teams.Intra
 import Gundeck.Types.Notification
 import Test.Tasty
-import Test.Tasty.Cannon (Cannon, TimeoutUnit (..), (#))
+import Test.Tasty.Cannon (TimeoutUnit (..), (#))
 import Test.Tasty.HUnit
+import TestSetup (test, TestSetup, TestM, tsCannon, tsGalley)

 import API.SQS
 import UnliftIO (mapConcurrently, mapConcurrently_)
@@ -31,16 +32,6 @@ import qualified Data.UUID as UUID
 import qualified Galley.Types as Conv
 import qualified Network.Wai.Utilities.Error as Error
 import qualified Test.Tasty.Cannon as WS
-import qualified Galley.Aws as Aws
-
-type TestSignature a = Galley -> Brig -> Cannon -> Maybe Aws.Env -> Http a
-
-test :: IO TestSetup -> TestName -> TestSignature a -> TestTree
-test s n h = testCase n runTest
-  where
-    runTest = do
-        setup <- s
-        void $ runHttpT (manager setup) (h (galley setup) (brig setup) (cannon setup) (awsEnv setup))

 tests :: IO TestSetup -> TestTree
 tests s = testGroup "Teams API"
@@ -81,13 +72,14 @@ tests s = testGroup "Teams API"
 timeout :: WS.Timeout
 timeout = 3 # Second

-testCreateTeam :: Galley -> Brig -> Cannon -> Maybe Aws.Env -> Http ()
-testCreateTeam g b c a = do
-    owner <- Util.randomUser b
+testCreateTeam :: TestM ()
+testCreateTeam = do
+    c <- view tsCannon
+    owner <- Util.randomUser
     WS.bracketR c owner $ \wsOwner -> do
-        tid <- Util.createTeam g "foo" owner []
-        team <- Util.getTeam g owner tid
-        assertQueueEmpty a
+        tid <- Util.createTeam "foo" owner []
+        team <- Util.getTeam owner tid
+        assertQueueEmpty
         liftIO $ do
             assertEqual "owner" owner (team^.teamCreator)
         eventChecks <- WS.awaitMatch timeout wsOwner $ \notif -> do
@@ -98,46 +90,48 @@ testCreateTeam g b c a = do
             e^.eventData @?= Just (EdTeamCreate team)
         void $ WS.assertSuccess eventChecks

-testCreateMulitpleBindingTeams :: Galley -> Brig -> Cannon -> Maybe Aws.Env -> Http ()
-testCreateMulitpleBindingTeams g b _ a = do
-    owner <- Util.randomUser b
-    _ <- Util.createTeamInternal g "foo" owner
-    assertQueue "create team" a tActivate
+testCreateMulitpleBindingTeams :: TestM ()
+testCreateMulitpleBindingTeams = do
+    g <- view tsGalley
+    owner <- Util.randomUser
+    _ <- Util.createTeamInternal "foo" owner
+    assertQueue "create team" tActivate
     -- Cannot create more teams if bound (used internal API)
     let nt = NonBindingNewTeam $ newNewTeam (unsafeRange "owner") (unsafeRange "icon")
     post (g . path "/teams" . zUser owner . zConn "conn" . json nt) !!!
         const 403 === statusCode
     -- If never used the internal API, can create multiple teams
-    owner' <- Util.randomUser b
-    void $ Util.createTeam g "foo" owner' []
-    void $ Util.createTeam g "foo" owner' []
-
-testCreateBindingTeamWithCurrency :: Galley -> Brig -> Cannon -> Maybe Aws.Env -> Http ()
-testCreateBindingTeamWithCurrency g b _ a = do
-    _owner <- Util.randomUser b
-    _ <- Util.createTeamInternal g "foo" _owner
+    owner' <- Util.randomUser
+    void $ Util.createTeam "foo" owner' []
+    void $ Util.createTeam "foo" owner' []
+
+testCreateBindingTeamWithCurrency :: TestM ()
+testCreateBindingTeamWithCurrency = do
+    _owner <- Util.randomUser
+    _ <- Util.createTeamInternal "foo" _owner
     -- Backwards compatible
-    assertQueue "create team" a (tActivateWithCurrency Nothing)
+    assertQueue "create team" (tActivateWithCurrency Nothing)
     -- Ensure currency is properly journaled
-    _owner <- Util.randomUser b
-    _ <- Util.createTeamInternalWithCurrency g "foo" _owner Currency.USD
-    assertQueue "create team" a (tActivateWithCurrency $ Just Currency.USD)
-
-testCreateTeamWithMembers :: Galley -> Brig -> Cannon -> Maybe Aws.Env -> Http ()
-testCreateTeamWithMembers g b c _ = do
-    owner <- Util.randomUser b
-    user1 <- Util.randomUser b
-    user2 <- Util.randomUser b
+    _owner <- Util.randomUser
+    _ <- Util.createTeamInternalWithCurrency "foo" _owner Currency.USD
+    assertQueue "create team" (tActivateWithCurrency $ Just Currency.USD)
+
+testCreateTeamWithMembers :: TestM ()
+testCreateTeamWithMembers = do
+    c <- view tsCannon
+    owner <- Util.randomUser
+    user1 <- Util.randomUser
+    user2 <- Util.randomUser
     let pp = Util.symmPermissions [CreateConversation, AddRemoveConvMember]
     let m1 = newTeamMember' pp user1
     let m2 = newTeamMember' pp user2
-    Util.connectUsers b owner (list1 user1 [user2])
+    Util.connectUsers owner (list1 user1 [user2])
     WS.bracketR3 c owner user1 user2 $ \(wsOwner, wsUser1, wsUser2) -> do
-        tid <- Util.createTeam g "foo" owner [m1, m2]
-        team <- Util.getTeam g owner tid
-        mem <- Util.getTeamMembers g owner tid
+        tid <- Util.createTeam "foo" owner [m1, m2]
+        team <- Util.getTeam owner tid
+        mem <- Util.getTeamMembers owner tid
         liftIO $ do
             assertEqual "members"
                 (Set.fromList [newTeamMember' fullPermissions owner, m1, m2])
@@ -151,68 +145,71 @@ testCreateTeamWithMembers g b c _ = do
             e^.eventTeam @?= (team^.teamId)
             e^.eventData @?= Just (EdTeamCreate team)

-testCreateOne2OneFailNonBindingTeamMembers :: Galley -> Brig -> Cannon -> Maybe Aws.Env -> Http ()
-testCreateOne2OneFailNonBindingTeamMembers g b _ a = do
-    owner <- Util.randomUser b
+testCreateOne2OneFailNonBindingTeamMembers :: TestM ()
+testCreateOne2OneFailNonBindingTeamMembers = do
+    owner <- Util.randomUser
     let p1 = Util.symmPermissions [CreateConversation, AddRemoveConvMember]
     let p2 = Util.symmPermissions [CreateConversation, AddRemoveConvMember, AddTeamMember]
-    mem1 <- newTeamMember' p1 <$> Util.randomUser b
-    mem2 <- newTeamMember' p2 <$> Util.randomUser b
-    Util.connectUsers b owner (list1 (mem1^.userId) [mem2^.userId])
-    tid <- Util.createTeam g "foo" owner [mem1, mem2]
+    mem1 <- newTeamMember' p1 <$> Util.randomUser
+    mem2 <- newTeamMember' p2 <$> Util.randomUser
+    Util.connectUsers owner (list1 (mem1^.userId) [mem2^.userId])
+    tid <- Util.createTeam "foo" owner [mem1, mem2]
     -- Cannot create a 1-1 conversation, not connected and in the same team but not binding
-    Util.createOne2OneTeamConv g (mem1^.userId) (mem2^.userId) Nothing tid !!! do
+    Util.createOne2OneTeamConv (mem1^.userId) (mem2^.userId) Nothing tid !!! do
         const 404 === statusCode
         const "non-binding-team" === (Error.label . Util.decodeBody' "error label")
     -- Both have a binding team but not the same team
-    owner1 <- Util.randomUser b
-    tid1 <- Util.createTeamInternal g "foo" owner1
-    assertQueue "create team" a tActivate
-    owner2 <- Util.randomUser b
-    void $ Util.createTeamInternal g "foo" owner2
-    assertQueue "create another team" a tActivate
-    Util.createOne2OneTeamConv g owner1 owner2 Nothing tid1 !!! do
+    owner1 <- Util.randomUser
+    tid1 <- Util.createTeamInternal "foo" owner1
+    assertQueue "create team" tActivate
+    owner2 <- Util.randomUser
+    void $ Util.createTeamInternal "foo" owner2
+    assertQueue "create another team" tActivate
+    Util.createOne2OneTeamConv owner1 owner2 Nothing tid1 !!! do
        const 403 === statusCode
        const "non-binding-team-members" === (Error.label . Util.decodeBody' "error label")

 testCreateOne2OneWithMembers :: HasCallStack => Role -- ^ Role of the user who creates the conversation
-    -> Galley -> Brig -> Cannon -> Maybe Aws.Env -> Http ()
-testCreateOne2OneWithMembers (rolePermissions -> perms) g b c a = do
-    owner <- Util.randomUser b
-    tid <- Util.createTeamInternal g "foo" owner
-    assertQueue "create team" a tActivate
-    mem1 <- newTeamMember' perms <$> Util.randomUser b
+    -> TestM ()
+testCreateOne2OneWithMembers (rolePermissions -> perms) = do
+    c <- view tsCannon
+    owner <- Util.randomUser
+    tid <- Util.createTeamInternal "foo" owner
+    assertQueue "create team" tActivate
+    mem1 <- newTeamMember' perms <$> Util.randomUser
     WS.bracketR c (mem1^.userId) $ \wsMem1 -> do
-        Util.addTeamMemberInternal g tid mem1
+        Util.addTeamMemberInternal tid mem1
         checkTeamMemberJoin tid (mem1^.userId) wsMem1
-        assertQueue "team member join" a $ tUpdate 2 [owner]
+        assertQueue "team member join" $ tUpdate 2 [owner]
-    void $ retryWhileN 10 repeatIf (Util.createOne2OneTeamConv g owner (mem1^.userId) Nothing tid)
+    void $ retryWhileN 10 repeatIf (Util.createOne2OneTeamConv owner (mem1^.userId) Nothing tid)
     -- Recreating a One2One is a no-op, returns a 200
-    Util.createOne2OneTeamConv g owner (mem1^.userId) Nothing tid !!! const 200 === statusCode
+    Util.createOne2OneTeamConv owner (mem1^.userId) Nothing tid !!! const 200 === statusCode
   where
-    repeatIf :: Util.ResponseLBS -> Bool
+    repeatIf :: ResponseLBS -> Bool
     repeatIf r = statusCode r /= 201

-testAddTeamMember :: Galley -> Brig -> Cannon -> Maybe Aws.Env -> Http ()
-testAddTeamMember g b c _ = do
-    owner <- Util.randomUser b
+testAddTeamMember :: TestM ()
+testAddTeamMember = do
+    c <- view tsCannon
+    g <- view tsGalley
+    owner <- Util.randomUser
     let p1 = Util.symmPermissions [CreateConversation, AddRemoveConvMember]
     let p2 = Util.symmPermissions [CreateConversation, AddRemoveConvMember, AddTeamMember]
-    mem1 <- newTeamMember' p1 <$> Util.randomUser b
-    mem2 <- newTeamMember' p2 <$> Util.randomUser b
-    Util.connectUsers b owner (list1 (mem1^.userId) [mem2^.userId])
-    Util.connectUsers b (mem1^.userId) (list1 (mem2^.userId) [])
-    tid <- Util.createTeam g "foo" owner [mem1, mem2]
+    mem1 <- newTeamMember' p1 <$> Util.randomUser
+    mem2 <- newTeamMember' p2 <$> Util.randomUser
+    Util.connectUsers owner (list1 (mem1^.userId) [mem2^.userId])
+    Util.connectUsers (mem1^.userId) (list1 (mem2^.userId) [])
+    tid <- Util.createTeam "foo" owner [mem1, mem2]

-    mem3 <- newTeamMember' p1 <$> Util.randomUser b
+    mem3 <- newTeamMember' p1 <$> Util.randomUser
     let payload = json (newNewTeamMember mem3)
-    Util.connectUsers b (mem1^.userId) (list1 (mem3^.userId) [])
-    Util.connectUsers b (mem2^.userId) (list1 (mem3^.userId) [])
+    Util.connectUsers (mem1^.userId) (list1 (mem3^.userId) [])
+    Util.connectUsers (mem2^.userId) (list1 (mem3^.userId) [])
     -- `mem1` lacks permission to add new team members
     post (g . paths ["teams", toByteString' tid, "members"] . zUser (mem1^.userId) . payload) !!!
@@ -220,39 +217,41 @@ testAddTeamMember g b c _ = do WS.bracketRN c [owner, (mem1^.userId), (mem2^.userId), (mem3^.userId)] $ \[wsOwner, wsMem1, wsMem2, wsMem3] -> do -- `mem2` has `AddTeamMember` permission - Util.addTeamMember g (mem2^.userId) tid mem3 + Util.addTeamMember (mem2^.userId) tid mem3 mapConcurrently_ (checkTeamMemberJoin tid (mem3^.userId)) [wsOwner, wsMem1, wsMem2, wsMem3] -testAddTeamMemberCheckBound :: Galley -> Brig -> Cannon -> Maybe Aws.Env -> Http () -testAddTeamMemberCheckBound g b _ a = do - ownerBound <- Util.randomUser b - tidBound <- Util.createTeamInternal g "foo" ownerBound - assertQueue "create team" a tActivate +testAddTeamMemberCheckBound :: TestM () +testAddTeamMemberCheckBound = do + g <- view tsGalley + ownerBound <- Util.randomUser + tidBound <- Util.createTeamInternal "foo" ownerBound + assertQueue "create team" tActivate - rndMem <- newTeamMember' (Util.symmPermissions []) <$> Util.randomUser b + rndMem <- newTeamMember' (Util.symmPermissions []) <$> Util.randomUser -- Cannot add any users to bound teams post (g . paths ["teams", toByteString' tidBound, "members"] . zUser ownerBound . zConn "conn" . json (newNewTeamMember rndMem)) !!! const 403 === statusCode - owner <- Util.randomUser b - tid <- Util.createTeam g "foo" owner [] + owner <- Util.randomUser + tid <- Util.createTeam "foo" owner [] -- Cannot add bound users to any teams let boundMem = newTeamMember' (Util.symmPermissions []) ownerBound post (g . paths ["teams", toByteString' tid, "members"] . zUser owner . zConn "conn" . json (newNewTeamMember boundMem)) !!! 
const 403 === statusCode -testAddTeamMemberInternal :: Galley -> Brig -> Cannon -> Maybe Aws.Env -> Http () -testAddTeamMemberInternal g b c a = do - owner <- Util.randomUser b - tid <- Util.createTeam g "foo" owner [] +testAddTeamMemberInternal :: TestM () +testAddTeamMemberInternal = do + c <- view tsCannon + owner <- Util.randomUser + tid <- Util.createTeam "foo" owner [] let p1 = Util.symmPermissions [GetBilling] -- permissions are irrelevant on internal endpoint - mem1 <- newTeamMember' p1 <$> Util.randomUser b + mem1 <- newTeamMember' p1 <$> Util.randomUser WS.bracketRN c [owner, mem1^.userId] $ \[wsOwner, wsMem1] -> do - Util.addTeamMemberInternal g tid mem1 + Util.addTeamMemberInternal tid mem1 liftIO . void $ mapConcurrently (checkJoinEvent tid (mem1^.userId)) [wsOwner, wsMem1] - assertQueue "tem member join" a $ tUpdate 2 [owner] - void $ Util.getTeamMemberInternal g tid (mem1^.userId) + assertQueue "tem member join" $ tUpdate 2 [owner] + void $ Util.getTeamMemberInternal tid (mem1^.userId) where checkJoinEvent tid usr w = WS.assertMatch_ timeout w $ \notif -> do ntfTransient notif @?= False @@ -261,27 +260,29 @@ testAddTeamMemberInternal g b c a = do e^.eventTeam @?= tid e^.eventData @?= Just (EdMemberJoin usr) -testRemoveTeamMember :: Galley -> Brig -> Cannon -> Maybe Aws.Env -> Http () -testRemoveTeamMember g b c _ = do - owner <- Util.randomUser b +testRemoveTeamMember :: TestM () +testRemoveTeamMember = do + c <- view tsCannon + g <- view tsGalley + owner <- Util.randomUser let p1 = Util.symmPermissions [AddRemoveConvMember] let p2 = Util.symmPermissions [AddRemoveConvMember, RemoveTeamMember] - mem1 <- newTeamMember' p1 <$> Util.randomUser b - mem2 <- newTeamMember' p2 <$> Util.randomUser b - mext1 <- Util.randomUser b - mext2 <- Util.randomUser b - mext3 <- Util.randomUser b - Util.connectUsers b owner (list1 (mem1^.userId) [mem2^.userId, mext1, mext2, mext3]) - tid <- Util.createTeam g "foo" owner [mem1, mem2] + mem1 <- newTeamMember' p1 <$> 
Util.randomUser + mem2 <- newTeamMember' p2 <$> Util.randomUser + mext1 <- Util.randomUser + mext2 <- Util.randomUser + mext3 <- Util.randomUser + Util.connectUsers owner (list1 (mem1^.userId) [mem2^.userId, mext1, mext2, mext3]) + tid <- Util.createTeam "foo" owner [mem1, mem2] -- Managed conversation: - void $ Util.createManagedConv g owner tid [] (Just "gossip") Nothing Nothing + void $ Util.createManagedConv owner tid [] (Just "gossip") Nothing Nothing -- Regular conversation: - cid2 <- Util.createTeamConv g owner tid [mem1^.userId, mem2^.userId, mext1] (Just "blaa") Nothing Nothing + cid2 <- Util.createTeamConv owner tid [mem1^.userId, mem2^.userId, mext1] (Just "blaa") Nothing Nothing -- Member external 2 is a guest and not a part of any conversation that mem1 is a part of - void $ Util.createTeamConv g owner tid [mem2^.userId, mext2] (Just "blaa") Nothing Nothing + void $ Util.createTeamConv owner tid [mem2^.userId, mext2] (Just "blaa") Nothing Nothing -- Member external 3 is a guest and part of a conversation that mem1 is a part of - cid3 <- Util.createTeamConv g owner tid [mem1^.userId, mext3] (Just "blaa") Nothing Nothing + cid3 <- Util.createTeamConv owner tid [mem1^.userId, mext3] (Just "blaa") Nothing Nothing WS.bracketRN c [owner, mem1^.userId, mem2^.userId, mext1, mext2, mext3] $ \ws@[wsOwner, wsMem1, wsMem2, wsMext1, _wsMext2, wsMext3] -> do -- `mem1` lacks permission to remove team members @@ -299,25 +300,27 @@ testRemoveTeamMember g b c _ = do ) !!! 
const 200 === statusCode -- Ensure that `mem1` is still a user (tid is not a binding team) - Util.ensureDeletedState b False owner (mem1^.userId) + Util.ensureDeletedState False owner (mem1^.userId) mapConcurrently_ (checkTeamMemberLeave tid (mem1^.userId)) [wsOwner, wsMem1, wsMem2] checkConvMemberLeaveEvent cid2 (mem1^.userId) wsMext1 checkConvMemberLeaveEvent cid3 (mem1^.userId) wsMext3 WS.assertNoEvent timeout ws -testRemoveBindingTeamMember :: Bool -> Galley -> Brig -> Cannon -> Maybe Aws.Env -> Http () -testRemoveBindingTeamMember ownerHasPassword g b c a = do - owner <- Util.randomUser' ownerHasPassword b - tid <- Util.createTeamInternal g "foo" owner - assertQueue "create team" a tActivate - mext <- Util.randomUser b +testRemoveBindingTeamMember :: Bool -> TestM () +testRemoveBindingTeamMember ownerHasPassword = do + g <- view tsGalley + c <- view tsCannon + owner <- Util.randomUser' ownerHasPassword + tid <- Util.createTeamInternal "foo" owner + assertQueue "create team" tActivate + mext <- Util.randomUser let p1 = Util.symmPermissions [AddRemoveConvMember] - mem1 <- newTeamMember' p1 <$> Util.randomUser b - Util.addTeamMemberInternal g tid mem1 - assertQueue "team member join" a $ tUpdate 2 [owner] - Util.connectUsers b owner (singleton mext) - cid1 <- Util.createTeamConv g owner tid [(mem1^.userId), mext] (Just "blaa") Nothing Nothing + mem1 <- newTeamMember' p1 <$> Util.randomUser + Util.addTeamMemberInternal tid mem1 + assertQueue "team member join" $ tUpdate 2 [owner] + Util.connectUsers owner (singleton mext) + cid1 <- Util.createTeamConv owner tid [(mem1^.userId), mext] (Just "blaa") Nothing Nothing when ownerHasPassword $ do -- Deleting from a binding team with empty body is invalid @@ -346,7 +349,7 @@ testRemoveBindingTeamMember ownerHasPassword g b c a = do const "access-denied" === (Error.label . 
Util.decodeBody' "error label") -- Mem1 is still part of Wire - Util.ensureDeletedState b False owner (mem1^.userId) + Util.ensureDeletedState False owner (mem1^.userId) WS.bracketR2 c owner mext $ \(wsOwner, wsMext) -> do if ownerHasPassword @@ -371,52 +374,53 @@ testRemoveBindingTeamMember ownerHasPassword g b c a = do checkTeamMemberLeave tid (mem1^.userId) wsOwner checkConvMemberLeaveEvent cid1 (mem1^.userId) wsMext - assertQueue "team member leave" a $ tUpdate 1 [owner] + assertQueue "team member leave" $ tUpdate 1 [owner] WS.assertNoEvent timeout [wsMext] -- Mem1 is now gone from Wire - Util.ensureDeletedState b True owner (mem1^.userId) + Util.ensureDeletedState True owner (mem1^.userId) -testAddTeamConv :: Galley -> Brig -> Cannon -> Maybe Aws.Env -> Http () -testAddTeamConv g b c _ = do - owner <- Util.randomUser b - extern <- Util.randomUser b +testAddTeamConv :: TestM () +testAddTeamConv = do + c <- view tsCannon + owner <- Util.randomUser + extern <- Util.randomUser let p = Util.symmPermissions [CreateConversation, AddRemoveConvMember] - mem1 <- newTeamMember' p <$> Util.randomUser b - mem2 <- newTeamMember' p <$> Util.randomUser b + mem1 <- newTeamMember' p <$> Util.randomUser + mem2 <- newTeamMember' p <$> Util.randomUser - Util.connectUsers b owner (list1 (mem1^.userId) [extern, mem2^.userId]) - tid <- Util.createTeam g "foo" owner [mem2] + Util.connectUsers owner (list1 (mem1^.userId) [extern, mem2^.userId]) + tid <- Util.createTeam "foo" owner [mem2] WS.bracketRN c [owner, extern, mem1^.userId, mem2^.userId] $ \ws@[wsOwner, wsExtern, wsMem1, wsMem2] -> do -- Managed conversation: - cid1 <- Util.createManagedConv g owner tid [] (Just "gossip") Nothing Nothing + cid1 <- Util.createManagedConv owner tid [] (Just "gossip") Nothing Nothing checkConvCreateEvent cid1 wsOwner checkConvCreateEvent cid1 wsMem2 -- Regular conversation: - cid2 <- Util.createTeamConv g owner tid [extern] (Just "blaa") Nothing Nothing + cid2 <- Util.createTeamConv owner tid 
[extern] (Just "blaa") Nothing Nothing checkConvCreateEvent cid2 wsOwner checkConvCreateEvent cid2 wsExtern -- mem2 is not a conversation member but still receives an event that -- a new team conversation has been created: checkTeamConvCreateEvent tid cid2 wsMem2 - Util.addTeamMember g owner tid mem1 + Util.addTeamMember owner tid mem1 checkTeamMemberJoin tid (mem1^.userId) wsOwner checkTeamMemberJoin tid (mem1^.userId) wsMem1 checkTeamMemberJoin tid (mem1^.userId) wsMem2 -- New team members are added automatically to managed conversations ... - Util.assertConvMember g (mem1^.userId) cid1 + Util.assertConvMember (mem1^.userId) cid1 -- ... but not to regular ones. - Util.assertNotConvMember g (mem1^.userId) cid2 + Util.assertNotConvMember (mem1^.userId) cid2 -- Managed team conversations get all team members added implicitly. - cid3 <- Util.createManagedConv g owner tid [] (Just "blup") Nothing Nothing + cid3 <- Util.createManagedConv owner tid [] (Just "blup") Nothing Nothing for_ [owner, mem1^.userId, mem2^.userId] $ \u -> - Util.assertConvMember g u cid3 + Util.assertConvMember u cid3 checkConvCreateEvent cid3 wsOwner checkConvCreateEvent cid3 wsMem1 @@ -424,25 +428,25 @@ testAddTeamConv g b c _ = do -- Non team members are never added implicitly. 
for_ [cid1, cid3] $ - Util.assertNotConvMember g extern + Util.assertNotConvMember extern WS.assertNoEvent timeout ws -testAddTeamConvAsExternalPartner :: Galley -> Brig -> Cannon -> Maybe Aws.Env -> Http () -testAddTeamConvAsExternalPartner g b _ a = do - owner <- Util.randomUser b - memMember1 <- newTeamMember' (rolePermissions RoleMember) <$> Util.randomUser b - memMember2 <- newTeamMember' (rolePermissions RoleMember) <$> Util.randomUser b - memExternalPartner <- newTeamMember' (rolePermissions RoleExternalPartner) <$> Util.randomUser b - Util.connectUsers b owner +testAddTeamConvAsExternalPartner :: TestM () +testAddTeamConvAsExternalPartner = do + owner <- Util.randomUser + memMember1 <- newTeamMember' (rolePermissions RoleMember) <$> Util.randomUser + memMember2 <- newTeamMember' (rolePermissions RoleMember) <$> Util.randomUser + memExternalPartner <- newTeamMember' (rolePermissions RoleExternalPartner) <$> Util.randomUser + Util.connectUsers owner (list1 (memMember1^.userId) [memExternalPartner^.userId, memMember2^.userId]) - tid <- Util.createTeamInternal g "foo" owner - assertQueue "create team" a tActivate + tid <- Util.createTeamInternal "foo" owner + assertQueue "create team" tActivate forM_ [(2, memMember1), (3, memMember2), (4, memExternalPartner)] $ \(i, mem) -> do - Util.addTeamMemberInternal g tid mem - assertQueue ("team member join #" ++ show i) a $ tUpdate i [owner] + Util.addTeamMemberInternal tid mem + assertQueue ("team member join #" ++ show i) $ tUpdate i [owner] let acc = Just $ Set.fromList [InviteAccess, CodeAccess] - Util.createTeamConvAccessRaw g + Util.createTeamConvAccessRaw (memExternalPartner^.userId) tid [memMember1^.userId, memMember2^.userId] @@ -451,10 +455,11 @@ testAddTeamConvAsExternalPartner g b _ a = do const 403 === statusCode const "operation-denied" === (Error.label . 
Util.decodeBody' "error label") -testAddManagedConv :: Galley -> Brig -> Cannon -> Maybe Aws.Env -> Http () -testAddManagedConv g b _c _ = do - owner <- Util.randomUser b - tid <- Util.createTeam g "foo" owner [] +testAddManagedConv :: TestM () +testAddManagedConv = do + g <- view tsGalley + owner <- Util.randomUser + tid <- Util.createTeam "foo" owner [] let tinfo = ConvTeamInfo tid True let conv = NewConvManaged $ NewConv [owner] (Just "blah") (Set.fromList []) Nothing (Just tinfo) Nothing Nothing @@ -467,82 +472,84 @@ testAddManagedConv g b _c _ = do ) !!! const 400 === statusCode -testAddTeamConvWithUsers :: Galley -> Brig -> Cannon -> Maybe Aws.Env -> Http () -testAddTeamConvWithUsers g b _ _ = do - owner <- Util.randomUser b - extern <- Util.randomUser b - Util.connectUsers b owner (list1 extern []) - tid <- Util.createTeam g "foo" owner [] +testAddTeamConvWithUsers :: TestM () +testAddTeamConvWithUsers = do + owner <- Util.randomUser + extern <- Util.randomUser + Util.connectUsers owner (list1 extern []) + tid <- Util.createTeam "foo" owner [] -- Create managed team conversation and erroneously specify external users. - cid <- Util.createManagedConv g owner tid [extern] (Just "gossip") Nothing Nothing + cid <- Util.createManagedConv owner tid [extern] (Just "gossip") Nothing Nothing -- External users have been ignored. - Util.assertNotConvMember g extern cid + Util.assertNotConvMember extern cid -- Team members are present. 
- Util.assertConvMember g owner cid + Util.assertConvMember owner cid -testAddTeamMemberToConv :: Galley -> Brig -> Cannon -> Maybe Aws.Env -> Http () -testAddTeamMemberToConv g b _ _ = do - owner <- Util.randomUser b +testAddTeamMemberToConv :: TestM () +testAddTeamMemberToConv = do + owner <- Util.randomUser let p = Util.symmPermissions [AddRemoveConvMember] - mem1 <- newTeamMember' p <$> Util.randomUser b - mem2 <- newTeamMember' p <$> Util.randomUser b - mem3 <- newTeamMember' (Util.symmPermissions []) <$> Util.randomUser b + mem1 <- newTeamMember' p <$> Util.randomUser + mem2 <- newTeamMember' p <$> Util.randomUser + mem3 <- newTeamMember' (Util.symmPermissions []) <$> Util.randomUser - Util.connectUsers b owner (list1 (mem1^.userId) [mem2^.userId, mem3^.userId]) - tid <- Util.createTeam g "foo" owner [mem1, mem2, mem3] + Util.connectUsers owner (list1 (mem1^.userId) [mem2^.userId, mem3^.userId]) + tid <- Util.createTeam "foo" owner [mem1, mem2, mem3] -- Team owner creates new regular team conversation: - cid <- Util.createTeamConv g owner tid [] (Just "blaa") Nothing Nothing + cid <- Util.createTeamConv owner tid [] (Just "blaa") Nothing Nothing -- Team member 1 (who is *not* a member of the new conversation) -- can add other team members without requiring a user connection -- thanks to both being team members and member 1 having the permission -- `AddRemoveConvMember`. - Util.assertNotConvMember g (mem1^.userId) cid - Util.postMembers g (mem1^.userId) (list1 (mem2^.userId) []) cid !!! const 200 === statusCode - Util.assertConvMember g (mem2^.userId) cid + Util.assertNotConvMember (mem1^.userId) cid + Util.postMembers (mem1^.userId) (list1 (mem2^.userId) []) cid !!! const 200 === statusCode + Util.assertConvMember (mem2^.userId) cid -- OTOH, team member 3 can not add another team member since it -- lacks the required permission - Util.assertNotConvMember g (mem3^.userId) cid - Util.postMembers g (mem3^.userId) (list1 (mem1^.userId) []) cid !!! 
do + Util.assertNotConvMember (mem3^.userId) cid + Util.postMembers (mem3^.userId) (list1 (mem1^.userId) []) cid !!! do const 403 === statusCode const "operation-denied" === (Error.label . Util.decodeBody' "error label") testUpdateTeamConv :: Role -- ^ Role of the user who creates the conversation - -> Galley -> Brig -> Cannon -> Maybe Aws.Env -> Http () -testUpdateTeamConv (rolePermissions -> perms) g b _ _ = do - owner <- Util.randomUser b - member <- Util.randomUser b - Util.connectUsers b owner (list1 member []) - tid <- Util.createTeam g "foo" owner [newTeamMember member perms Nothing] - cid <- Util.createTeamConv g owner tid [member] (Just "gossip") Nothing Nothing - resp <- updateTeamConv g member cid (ConversationRename "not gossip") + -> TestM () +testUpdateTeamConv (rolePermissions -> perms) = do + owner <- Util.randomUser + member <- Util.randomUser + Util.connectUsers owner (list1 member []) + tid <- Util.createTeam "foo" owner [newTeamMember member perms Nothing] + cid <- Util.createTeamConv owner tid [member] (Just "gossip") Nothing Nothing + resp <- updateTeamConv member cid (ConversationRename "not gossip") liftIO $ assertEqual "status" (if ModifyConvMetadata `elem` (perms ^. 
self) then 200 else 403) (statusCode resp) -testDeleteTeam :: Galley -> Brig -> Cannon -> Maybe Aws.Env -> Http () -testDeleteTeam g b c a = do - owner <- Util.randomUser b +testDeleteTeam :: TestM () +testDeleteTeam = do + g <- view tsGalley + c <- view tsCannon + owner <- Util.randomUser let p = Util.symmPermissions [AddRemoveConvMember] - member <- newTeamMember' p <$> Util.randomUser b - extern <- Util.randomUser b - Util.connectUsers b owner (list1 (member^.userId) [extern]) + member <- newTeamMember' p <$> Util.randomUser + extern <- Util.randomUser + Util.connectUsers owner (list1 (member^.userId) [extern]) - tid <- Util.createTeam g "foo" owner [member] - cid1 <- Util.createTeamConv g owner tid [] (Just "blaa") Nothing Nothing - cid2 <- Util.createManagedConv g owner tid [] (Just "blup") Nothing Nothing + tid <- Util.createTeam "foo" owner [member] + cid1 <- Util.createTeamConv owner tid [] (Just "blaa") Nothing Nothing + cid2 <- Util.createManagedConv owner tid [] (Just "blup") Nothing Nothing - Util.assertConvMember g owner cid2 - Util.assertConvMember g (member^.userId) cid2 - Util.assertNotConvMember g extern cid2 + Util.assertConvMember owner cid2 + Util.assertConvMember (member^.userId) cid2 + Util.assertNotConvMember extern cid2 - Util.postMembers g owner (list1 extern []) cid1 !!! const 200 === statusCode - Util.assertConvMember g owner cid1 - Util.assertConvMember g extern cid1 - Util.assertNotConvMember g (member^.userId) cid1 + Util.postMembers owner (list1 extern []) cid1 !!! const 200 === statusCode + Util.assertConvMember owner cid1 + Util.assertConvMember extern cid1 + Util.assertNotConvMember (member^.userId) cid1 void $ WS.bracketR3 c owner extern (member^.userId) $ \(wsOwner, wsExtern, wsMember) -> do delete (g . paths ["teams", toByteString' tid] . zUser owner . zConn "conn") !!! 
@@ -563,32 +570,34 @@ testDeleteTeam g b c a = do for_ [owner, extern, member^.userId] $ \u -> do -- Ensure no user got deleted - Util.ensureDeletedState b False owner u + Util.ensureDeletedState False owner u for_ [cid1, cid2] $ \x -> do - Util.getConv g u x !!! const 404 === statusCode - Util.getSelfMember g u x !!! do + Util.getConv u x !!! const 404 === statusCode + Util.getSelfMember u x !!! do const 200 === statusCode const (Just Null) === Util.decodeBody - assertQueueEmpty a - -testDeleteBindingTeam :: Bool -> Galley -> Brig -> Cannon -> Maybe Aws.Env -> Http () -testDeleteBindingTeam ownerHasPassword g b c a = do - owner <- Util.randomUser' ownerHasPassword b - tid <- Util.createTeamInternal g "foo" owner - assertQueue "create team" a tActivate + assertQueueEmpty + +testDeleteBindingTeam :: Bool -> TestM () +testDeleteBindingTeam ownerHasPassword = do + g <- view tsGalley + c <- view tsCannon + owner <- Util.randomUser' ownerHasPassword + tid <- Util.createTeamInternal "foo" owner + assertQueue "create team" tActivate let p1 = Util.symmPermissions [AddRemoveConvMember] - mem1 <- newTeamMember' p1 <$> Util.randomUser b + mem1 <- newTeamMember' p1 <$> Util.randomUser let p2 = Util.symmPermissions [AddRemoveConvMember] - mem2 <- newTeamMember' p2 <$> Util.randomUser b + mem2 <- newTeamMember' p2 <$> Util.randomUser let p3 = Util.symmPermissions [AddRemoveConvMember] - mem3 <- newTeamMember' p3 <$> Util.randomUser b - Util.addTeamMemberInternal g tid mem1 - assertQueue "team member join 2" a $ tUpdate 2 [owner] - Util.addTeamMemberInternal g tid mem2 - assertQueue "team member join 3" a $ tUpdate 3 [owner] - Util.addTeamMemberInternal g tid mem3 - assertQueue "team member join 4" a $ tUpdate 4 [owner] - extern <- Util.randomUser b + mem3 <- newTeamMember' p3 <$> Util.randomUser + Util.addTeamMemberInternal tid mem1 + assertQueue "team member join 2" $ tUpdate 2 [owner] + Util.addTeamMemberInternal tid mem2 + assertQueue "team member join 3" $ tUpdate 3 [owner] 
+ Util.addTeamMemberInternal tid mem3 + assertQueue "team member join 4" $ tUpdate 4 [owner] + extern <- Util.randomUser delete ( g . paths ["teams", toByteString' tid] @@ -607,7 +616,7 @@ testDeleteBindingTeam ownerHasPassword g b c a = do then Just $ PlainTextPassword Util.defPassword else Nothing)) ) !!! const 202 === statusCode - assertQueue "team member leave 1" a $ tUpdate 3 [owner] + assertQueue "team member leave 1" $ tUpdate 3 [owner] void $ WS.bracketRN c [owner, (mem1^.userId), (mem2^.userId), extern] $ \[wsOwner, wsMember1, wsMember2, wsExtern] -> do delete ( g @@ -630,35 +639,37 @@ testDeleteBindingTeam ownerHasPassword g b c a = do WS.assertNoEvent (1 # Second) [wsExtern] -- Note that given the async nature of team deletion, we may -- have other events in the queue (such as TEAM_UPDATE) - tryAssertQueue 10 "team delete, should be there" a tDelete + tryAssertQueue 10 "team delete, should be there" tDelete forM_ [owner, (mem1^.userId), (mem2^.userId)] $ -- Ensure users are marked as deleted; since we already -- received the event, should _really_ be deleted - Util.ensureDeletedState b True extern + Util.ensureDeletedState True extern -- Let's clean it up, just in case - ensureQueueEmpty a + ensureQueueEmpty -testDeleteTeamConv :: Galley -> Brig -> Cannon -> Maybe Aws.Env -> Http () -testDeleteTeamConv g b c _ = do - owner <- Util.randomUser b +testDeleteTeamConv :: TestM () +testDeleteTeamConv = do + g <- view tsGalley + c <- view tsCannon + owner <- Util.randomUser let p = Util.symmPermissions [DeleteConversation] - member <- newTeamMember' p <$> Util.randomUser b - extern <- Util.randomUser b - Util.connectUsers b owner (list1 (member^.userId) [extern]) + member <- newTeamMember' p <$> Util.randomUser + extern <- Util.randomUser + Util.connectUsers owner (list1 (member^.userId) [extern]) - tid <- Util.createTeam g "foo" owner [member] - cid1 <- Util.createTeamConv g owner tid [] (Just "blaa") Nothing Nothing + tid <- Util.createTeam "foo" owner 
[member] + cid1 <- Util.createTeamConv owner tid [] (Just "blaa") Nothing Nothing let access = ConversationAccessUpdate [InviteAccess, CodeAccess] ActivatedAccessRole - putAccessUpdate g owner cid1 access !!! const 200 === statusCode - code <- decodeConvCodeEvent <$> (postConvCode g owner cid1 (postConvCode owner cid1 Util.assertConvMember g u cid1 - for_ [owner, member^.userId] $ \u -> Util.assertConvMember g u cid2 + for_ [owner, member^.userId, extern] $ \u -> Util.assertConvMember u cid1 + for_ [owner, member^.userId] $ \u -> Util.assertConvMember u cid2 WS.bracketR3 c owner extern (member^.userId) $ \(wsOwner, wsExtern, wsMember) -> do delete ( g @@ -684,10 +695,10 @@ testDeleteTeamConv g b c _ = do for_ [cid1, cid2] $ \x -> for_ [owner, member^.userId, extern] $ \u -> do - Util.getConv g u x !!! const 404 === statusCode - Util.assertNotConvMember g u x + Util.getConv u x !!! const 404 === statusCode + Util.assertNotConvMember u x - postConvCodeCheck g code !!! const 404 === statusCode + postConvCodeCheck code !!! const 404 === statusCode where checkTeamConvDeleteEvent tid cid w = WS.assertMatch_ timeout w $ \notif -> do ntfTransient notif @?= False @@ -703,13 +714,15 @@ testDeleteTeamConv g b c _ = do evtConv e @?= cid evtData e @?= Nothing -testUpdateTeam :: Galley -> Brig -> Cannon -> Maybe Aws.Env -> Http () -testUpdateTeam g b c _ = do - owner <- Util.randomUser b +testUpdateTeam :: TestM () +testUpdateTeam = do + g <- view tsGalley + c <- view tsCannon + owner <- Util.randomUser let p = Util.symmPermissions [DeleteConversation] - member <- newTeamMember' p <$> Util.randomUser b - Util.connectUsers b owner (list1 (member^.userId) []) - tid <- Util.createTeam g "foo" owner [member] + member <- newTeamMember' p <$> Util.randomUser + Util.connectUsers owner (list1 (member^.userId) []) + tid <- Util.createTeam "foo" owner [member] let bad = object ["name" .= T.replicate 100 "too large"] put ( g . 
paths ["teams", toByteString' tid] @@ -740,13 +753,15 @@ testUpdateTeam g b c _ = do e^.eventTeam @?= tid e^.eventData @?= Just (EdTeamUpdate upd) -testUpdateTeamMember :: Galley -> Brig -> Cannon -> Maybe Aws.Env -> Http () -testUpdateTeamMember g b c a = do - owner <- Util.randomUser b +testUpdateTeamMember :: TestM () +testUpdateTeamMember = do + g <- view tsGalley + c <- view tsCannon + owner <- Util.randomUser let p = Util.symmPermissions [SetMemberPermissions] - member <- newTeamMember' p <$> Util.randomUser b - Util.connectUsers b owner (list1 (member^.userId) []) - tid <- Util.createTeam g "foo" owner [member] + member <- newTeamMember' p <$> Util.randomUser + Util.connectUsers owner (list1 (member^.userId) []) + tid <- Util.createTeam "foo" owner [member] -- Must have at least 1 member with full permissions let changeOwner = newNewTeamMember (newTeamMember' p owner) put ( g @@ -765,7 +780,7 @@ testUpdateTeamMember g b c a = do . zConn "conn" . json changeMember ) !!! const 200 === statusCode - member' <- Util.getTeamMember g owner tid (member^.userId) + member' <- Util.getTeamMember owner tid (member^.userId) liftIO $ assertEqual "permissions" (member'^.permissions) (changeMember^.ntmNewTeamMember.permissions) checkTeamMemberUpdateEvent tid (member^.userId) wsOwner (pure fullPermissions) checkTeamMemberUpdateEvent tid (member^.userId) wsMember (pure fullPermissions) @@ -778,13 +793,13 @@ testUpdateTeamMember g b c a = do . zConn "conn" . json changeOwner ) !!! const 200 === statusCode - owner' <- Util.getTeamMember g (member^.userId) tid owner + owner' <- Util.getTeamMember (member^.userId) tid owner liftIO $ assertEqual "permissions" (owner'^.permissions) (changeOwner^.ntmNewTeamMember.permissions) -- owner no longer has GetPermissions, but she can still see the update because it's about her! 
checkTeamMemberUpdateEvent tid owner wsOwner (pure p) checkTeamMemberUpdateEvent tid owner wsMember (pure p) WS.assertNoEvent timeout [wsOwner, wsMember] - assertQueueEmpty a + assertQueueEmpty where checkTeamMemberUpdateEvent tid uid w mPerm = WS.assertMatch_ timeout w $ \notif -> do ntfTransient notif @?= False @@ -793,22 +808,23 @@ testUpdateTeamMember g b c a = do e^.eventTeam @?= tid e^.eventData @?= Just (EdMemberUpdate uid mPerm) -testUpdateTeamStatus :: Galley -> Brig -> Cannon -> Maybe Aws.Env -> Http () -testUpdateTeamStatus g b _ a = do - owner <- Util.randomUser b - tid <- Util.createTeamInternal g "foo" owner - assertQueue "create team" a tActivate +testUpdateTeamStatus :: TestM () +testUpdateTeamStatus = do + g <- view tsGalley + owner <- Util.randomUser + tid <- Util.createTeamInternal "foo" owner + assertQueue "create team" tActivate -- Check for idempotency - Util.changeTeamStatus g tid Active - assertQueueEmpty a - Util.changeTeamStatus g tid Suspended - assertQueue "suspend first time" a tSuspend - Util.changeTeamStatus g tid Suspended - assertQueueEmpty a - Util.changeTeamStatus g tid Suspended - assertQueueEmpty a - Util.changeTeamStatus g tid Active - assertQueue "activate again" a tActivate + Util.changeTeamStatus tid Active + assertQueueEmpty + Util.changeTeamStatus tid Suspended + assertQueue "suspend first time" tSuspend + Util.changeTeamStatus tid Suspended + assertQueueEmpty + Util.changeTeamStatus tid Suspended + assertQueueEmpty + Util.changeTeamStatus tid Active + assertQueue "activate again" tActivate void $ put ( g . paths ["i", "teams", toByteString' tid, "status"] @@ -817,7 +833,7 @@ testUpdateTeamStatus g b _ a = do const 403 === statusCode const "invalid-team-status-update" === (Error.label . 
         Util.decodeBody' "error label")
 
-checkUserDeleteEvent :: HasCallStack => UserId -> WS.WebSocket -> Http ()
+checkUserDeleteEvent :: HasCallStack => UserId -> WS.WebSocket -> TestM ()
 checkUserDeleteEvent uid w = WS.assertMatch_ timeout w $ \notif -> do
     let j = Object $ List1.head (ntfPayload notif)
     let etype = j ^? key "type" . _String
@@ -825,7 +841,7 @@ checkUserDeleteEvent uid w = WS.assertMatch_ timeout w $ \notif -> do
     etype @?= Just "user.delete"
     euser @?= Just (UUID.toText (toUUID uid))
 
-checkTeamMemberJoin :: HasCallStack => TeamId -> UserId -> WS.WebSocket -> Http ()
+checkTeamMemberJoin :: HasCallStack => TeamId -> UserId -> WS.WebSocket -> TestM ()
 checkTeamMemberJoin tid uid w = WS.awaitMatch_ timeout w $ \notif -> do
     ntfTransient notif @?= False
     let e = List1.head (WS.unpackPayload notif)
@@ -833,7 +849,7 @@ checkTeamMemberJoin tid uid w = WS.awaitMatch_ timeout w $ \notif -> do
     e^.eventTeam @?= tid
     e^.eventData @?= Just (EdMemberJoin uid)
 
-checkTeamMemberLeave :: HasCallStack => TeamId -> UserId -> WS.WebSocket -> Http ()
+checkTeamMemberLeave :: HasCallStack => TeamId -> UserId -> WS.WebSocket -> TestM ()
 checkTeamMemberLeave tid usr w = WS.assertMatch_ timeout w $ \notif -> do
     ntfTransient notif @?= False
     let e = List1.head (WS.unpackPayload notif)
@@ -841,7 +857,7 @@ checkTeamMemberLeave tid usr w = WS.assertMatch_ timeout w $ \notif -> do
     e^.eventTeam @?= tid
     e^.eventData @?= Just (EdMemberLeave usr)
 
-checkTeamConvCreateEvent :: HasCallStack => TeamId -> ConvId -> WS.WebSocket -> Http ()
+checkTeamConvCreateEvent :: HasCallStack => TeamId -> ConvId -> WS.WebSocket -> TestM ()
 checkTeamConvCreateEvent tid cid w = WS.assertMatch_ timeout w $ \notif -> do
     ntfTransient notif @?= False
     let e = List1.head (WS.unpackPayload notif)
@@ -849,7 +865,7 @@ checkTeamConvCreateEvent tid cid w = WS.assertMatch_ timeout w $ \notif -> do
     e^.eventTeam @?= tid
     e^.eventData @?= Just (EdConvCreate cid)
 
-checkConvCreateEvent :: HasCallStack => ConvId -> WS.WebSocket -> Http ()
+checkConvCreateEvent :: HasCallStack => ConvId -> WS.WebSocket -> TestM ()
 checkConvCreateEvent cid w = WS.assertMatch_ timeout w $ \notif -> do
     ntfTransient notif @?= False
     let e = List1.head (WS.unpackPayload notif)
@@ -858,7 +874,7 @@ checkConvCreateEvent cid w = WS.assertMatch_ timeout w $ \notif -> do
         Just (Conv.EdConversation x) -> cnvId x @?= cid
         other -> assertFailure $ "Unexpected event data: " <> show other
 
-checkTeamDeleteEvent :: HasCallStack => TeamId -> WS.WebSocket -> Http ()
+checkTeamDeleteEvent :: HasCallStack => TeamId -> WS.WebSocket -> TestM ()
 checkTeamDeleteEvent tid w = WS.assertMatch_ timeout w $ \notif -> do
     ntfTransient notif @?= False
     let e = List1.head (WS.unpackPayload notif)
@@ -866,7 +882,7 @@ checkTeamDeleteEvent tid w = WS.assertMatch_ timeout w $ \notif -> do
     e^.eventTeam @?= tid
     e^.eventData @?= Nothing
 
-checkConvDeleteEvent :: HasCallStack => ConvId -> WS.WebSocket -> Http ()
+checkConvDeleteEvent :: HasCallStack => ConvId -> WS.WebSocket -> TestM ()
 checkConvDeleteEvent cid w = WS.assertMatch_ timeout w $ \notif -> do
     ntfTransient notif @?= False
     let e = List1.head (WS.unpackPayload notif)
@@ -874,7 +890,7 @@ checkConvDeleteEvent cid w = WS.assertMatch_ timeout w $ \notif -> do
     evtConv e @?= cid
     evtData e @?= Nothing
 
-checkConvMemberLeaveEvent :: HasCallStack => ConvId -> UserId -> WS.WebSocket -> Http ()
+checkConvMemberLeaveEvent :: HasCallStack => ConvId -> UserId -> WS.WebSocket -> TestM ()
 checkConvMemberLeaveEvent cid usr w = WS.assertMatch_ timeout w $ \notif -> do
     ntfTransient notif @?= False
     let e = List1.head (WS.unpackPayload notif)
@@ -884,22 +900,23 @@ checkConvMemberLeaveEvent cid usr w = WS.assertMatch_ timeout w $ \notif -> do
         Just (Conv.EdMembers mm) -> mm @?= Conv.Members [usr]
         other -> assertFailure $ "Unexpected event data: " <> show other
 
-postCryptoBroadcastMessageJson :: Galley -> Brig -> Cannon -> Maybe Aws.Env -> Http ()
-postCryptoBroadcastMessageJson g b c a = do
+postCryptoBroadcastMessageJson :: TestM ()
+postCryptoBroadcastMessageJson = do
+    c <- view tsCannon
     -- Team1: Alice, Bob. Team2: Charlie. Regular user: Dan. Connect Alice,Charlie,Dan
-    (alice, ac) <- randomUserWithClient b (someLastPrekeys !! 0)
-    (bob, bc) <- randomUserWithClient b (someLastPrekeys !! 1)
-    (charlie,cc) <- randomUserWithClient b (someLastPrekeys !! 2)
-    (dan, dc) <- randomUserWithClient b (someLastPrekeys !! 3)
-    connectUsers b alice (list1 charlie [dan])
-    tid1 <- createTeamInternal g "foo" alice
-    assertQueue "" a tActivate
-    addTeamMemberInternal g tid1 $ newTeamMember' (symmPermissions []) bob
-    assertQueue "" a $ tUpdate 2 [alice]
-    _ <- createTeamInternal g "foo" charlie
-    assertQueue "" a tActivate
+    (alice, ac) <- randomUserWithClient (someLastPrekeys !! 0)
+    (bob, bc) <- randomUserWithClient (someLastPrekeys !! 1)
+    (charlie,cc) <- randomUserWithClient (someLastPrekeys !! 2)
+    (dan, dc) <- randomUserWithClient (someLastPrekeys !! 3)
+    connectUsers alice (list1 charlie [dan])
+    tid1 <- createTeamInternal "foo" alice
+    assertQueue "" tActivate
+    addTeamMemberInternal tid1 $ newTeamMember' (symmPermissions []) bob
+    assertQueue "" $ tUpdate 2 [alice]
+    _ <- createTeamInternal "foo" charlie
+    assertQueue "" tActivate
     -- A second client for Alice
-    ac2 <- randomClient b alice (someLastPrekeys !! 4)
+    ac2 <- randomClient alice (someLastPrekeys !! 4)
     -- Complete: Alice broadcasts a message to Bob,Charlie,Dan and herself
     let t = 1 # Second -- WS receive timeout
     let msg = [(alice, ac2, "ciphertext0"), (bob, bc, "ciphertext1"), (charlie, cc, "ciphertext2"), (dan, dc, "ciphertext3")]
@@ -907,7 +924,7 @@ postCryptoBroadcastMessageJson g b c a = do
     -- Alice's clients 1 and 2 listen to their own messages only
     WS.bracketR (c . queryItem "client" (toByteString' ac2)) alice $ \wsA2 ->
       WS.bracketR (c . queryItem "client" (toByteString' ac)) alice $ \wsA1 -> do
-        Util.postOtrBroadcastMessage id g alice ac msg !!! do
+        Util.postOtrBroadcastMessage id alice ac msg !!! do
            const 201 === statusCode
            assertTrue_ (eqMismatch [] [] [] . decodeBody)
         -- Bob should get the broadcast (team member of alice)
@@ -921,29 +938,30 @@ postCryptoBroadcastMessageJson g b c a = do
         -- Alice's second client should get the broadcast
         void . liftIO $ WS.assertMatch t wsA2 (wsAssertOtr (selfConv alice) alice ac ac2 "ciphertext0")
 
-postCryptoBroadcastMessageJson2 :: Galley -> Brig -> Cannon -> Maybe Aws.Env -> Http ()
-postCryptoBroadcastMessageJson2 g b c a = do
+postCryptoBroadcastMessageJson2 :: TestM ()
+postCryptoBroadcastMessageJson2 = do
+    c <- view tsCannon
     -- Team1: Alice, Bob. Team2: Charlie. Connect Alice,Charlie
-    (alice, ac) <- randomUserWithClient b (someLastPrekeys !! 0)
-    (bob, bc) <- randomUserWithClient b (someLastPrekeys !! 1)
-    (charlie,cc) <- randomUserWithClient b (someLastPrekeys !! 2)
-    connectUsers b alice (list1 charlie [])
-    tid1 <- createTeamInternal g "foo" alice
-    assertQueue "" a tActivate
-    addTeamMemberInternal g tid1 $ newTeamMember' (symmPermissions []) bob
-    assertQueue "" a $ tUpdate 2 [alice]
+    (alice, ac) <- randomUserWithClient (someLastPrekeys !! 0)
+    (bob, bc) <- randomUserWithClient (someLastPrekeys !! 1)
+    (charlie,cc) <- randomUserWithClient (someLastPrekeys !! 2)
+    connectUsers alice (list1 charlie [])
+    tid1 <- createTeamInternal "foo" alice
+    assertQueue "" tActivate
+    addTeamMemberInternal tid1 $ newTeamMember' (symmPermissions []) bob
+    assertQueue "" $ tUpdate 2 [alice]
     let t = 3 # Second -- WS receive timeout
     -- Missing charlie
     let m1 = [(bob, bc, "ciphertext1")]
-    Util.postOtrBroadcastMessage id g alice ac m1 !!! do
+    Util.postOtrBroadcastMessage id alice ac m1 !!! do
         const 412 === statusCode
         assertTrue "1: Only Charlie and his device" (eqMismatch [(charlie, Set.singleton cc)] [] [] . decodeBody)
     -- Complete
     WS.bracketR2 c bob charlie $ \(wsB, wsE) -> do
         let m2 = [(bob, bc, "ciphertext2"), (charlie, cc, "ciphertext2")]
-        Util.postOtrBroadcastMessage id g alice ac m2 !!! do
+        Util.postOtrBroadcastMessage id alice ac m2 !!! do
            const 201 === statusCode
            assertTrue "No devices expected" (eqMismatch [] [] [] . decodeBody)
         void . liftIO $ WS.assertMatch t wsB (wsAssertOtr (selfConv bob) alice ac bc "ciphertext2")
@@ -952,7 +970,7 @@ postCryptoBroadcastMessageJson2 g b c a = do
     -- Redundant self
     WS.bracketR3 c alice bob charlie $ \(wsA, wsB, wsE) -> do
         let m3 = [(alice, ac, "ciphertext3"), (bob, bc, "ciphertext3"), (charlie, cc, "ciphertext3")]
-        Util.postOtrBroadcastMessage id g alice ac m3 !!! do
+        Util.postOtrBroadcastMessage id alice ac m3 !!! do
           const 201 === statusCode
           assertTrue "2: Only Alice and her device" (eqMismatch [] [(alice, Set.singleton ac)] [] . decodeBody)
         void . liftIO $ WS.assertMatch t wsB (wsAssertOtr (selfConv bob) alice ac bc "ciphertext3")
@@ -962,37 +980,38 @@ postCryptoBroadcastMessageJson2 g b c a = do
     -- Deleted charlie
     WS.bracketR2 c bob charlie $ \(wsB, wsE) -> do
-        deleteClient b charlie cc (Just $ PlainTextPassword defPassword) !!! const 200 === statusCode
+        deleteClient charlie cc (Just $ PlainTextPassword defPassword) !!! const 200 === statusCode
         let m4 = [(bob, bc, "ciphertext4"), (charlie, cc, "ciphertext4")]
-        Util.postOtrBroadcastMessage id g alice ac m4 !!! do
+        Util.postOtrBroadcastMessage id alice ac m4 !!! do
           const 201 === statusCode
          assertTrue "3: Only Charlie and his device" (eqMismatch [] [] [(charlie, Set.singleton cc)] . decodeBody)
        void . liftIO $ WS.assertMatch t wsB (wsAssertOtr (selfConv bob) alice ac bc "ciphertext4")
        -- charlie should not get it
        assertNoMsg wsE (wsAssertOtr (selfConv charlie) alice ac cc "ciphertext4")
 
-postCryptoBroadcastMessageProto :: Galley -> Brig -> Cannon -> Maybe Aws.Env -> Http ()
-postCryptoBroadcastMessageProto g b c a = do
+postCryptoBroadcastMessageProto :: TestM ()
+postCryptoBroadcastMessageProto = do
     -- similar to postCryptoBroadcastMessageJson except uses protobuf
     -- Team1: Alice, Bob. Team2: Charlie. Regular user: Dan. Connect Alice,Charlie,Dan
-    (alice, ac) <- randomUserWithClient b (someLastPrekeys !! 0)
-    (bob, bc) <- randomUserWithClient b (someLastPrekeys !! 1)
-    (charlie,cc) <- randomUserWithClient b (someLastPrekeys !! 2)
-    (dan, dc) <- randomUserWithClient b (someLastPrekeys !! 3)
-    connectUsers b alice (list1 charlie [dan])
-    tid1 <- createTeamInternal g "foo" alice
-    assertQueue "" a tActivate
-    addTeamMemberInternal g tid1 $ newTeamMember' (symmPermissions []) bob
-    assertQueue "" a $ tUpdate 2 [alice]
-    _ <- createTeamInternal g "foo" charlie
-    assertQueue "" a tActivate
+    c <- view tsCannon
+    (alice, ac) <- randomUserWithClient (someLastPrekeys !! 0)
+    (bob, bc) <- randomUserWithClient (someLastPrekeys !! 1)
+    (charlie,cc) <- randomUserWithClient (someLastPrekeys !! 2)
+    (dan, dc) <- randomUserWithClient (someLastPrekeys !! 3)
+    connectUsers alice (list1 charlie [dan])
+    tid1 <- createTeamInternal "foo" alice
+    assertQueue "" tActivate
+    addTeamMemberInternal tid1 $ newTeamMember' (symmPermissions []) bob
+    assertQueue "" $ tUpdate 2 [alice]
+    _ <- createTeamInternal "foo" charlie
+    assertQueue "" tActivate
     -- Complete: Alice broadcasts a message to Bob,Charlie,Dan
     let t = 1 # Second -- WS receive timeout
     let ciphertext = encodeCiphertext "hello bob"
     WS.bracketRN c [alice, bob, charlie, dan] $ \ws@[_, wsB, wsC, wsD] -> do
         let msg = otrRecipients [(bob, [(bc, ciphertext)]), (charlie, [(cc, ciphertext)]), (dan, [(dc, ciphertext)])]
-        Util.postProtoOtrBroadcast g alice ac msg !!! do
+        Util.postProtoOtrBroadcast alice ac msg !!! do
           const 201 === statusCode
          assertTrue_ (eqMismatch [] [] [] . decodeBody)
        -- Bob should get the broadcast (team member of alice)
@@ -1004,26 +1023,27 @@ postCryptoBroadcastMessageProto g b c a = do
     -- Alice should not get her own broadcast
     WS.assertNoEvent timeout ws
 
-postCryptoBroadcastMessageNoTeam :: Galley -> Brig -> Cannon -> Maybe Aws.Env -> Http ()
-postCryptoBroadcastMessageNoTeam g b _ _ = do
-    (alice, ac) <- randomUserWithClient b (someLastPrekeys !! 0)
-    (bob, bc) <- randomUserWithClient b (someLastPrekeys !! 1)
-    connectUsers b alice (list1 bob [])
+postCryptoBroadcastMessageNoTeam :: TestM ()
+postCryptoBroadcastMessageNoTeam = do
+    (alice, ac) <- randomUserWithClient (someLastPrekeys !! 0)
+    (bob, bc) <- randomUserWithClient (someLastPrekeys !! 1)
+    connectUsers alice (list1 bob [])
     let msg = [(bob, bc, "ciphertext1")]
-    Util.postOtrBroadcastMessage id g alice ac msg !!! const 404 === statusCode
-
-postCryptoBroadcastMessage100OrMaxConns :: Galley -> Brig -> Cannon -> Maybe Aws.Env -> Http ()
-postCryptoBroadcastMessage100OrMaxConns g b c a = do
-    (alice, ac) <- randomUserWithClient b (someLastPrekeys !! 0)
-    _ <- createTeamInternal g "foo" alice
-    assertQueue "" a tActivate
+    Util.postOtrBroadcastMessage id alice ac msg !!! const 404 === statusCode
+
+postCryptoBroadcastMessage100OrMaxConns :: TestM ()
+postCryptoBroadcastMessage100OrMaxConns = do
+    c <- view tsCannon
+    (alice, ac) <- randomUserWithClient (someLastPrekeys !! 0)
+    _ <- createTeamInternal "foo" alice
+    assertQueue "" tActivate
     ((bob, bc), others) <- createAndConnectUserWhileLimitNotReached alice (100 :: Int) [] (someLastPrekeys !! 1)
-    connectUsers b alice (list1 bob (fst <$> others))
+    connectUsers alice (list1 bob (fst <$> others))
     let t = 3 # Second -- WS receive timeout
     WS.bracketRN c (bob : (fst <$> others)) $ \ws -> do
         let f (u, clt) = (u, clt, "ciphertext")
         let msg = (bob, bc, "ciphertext") : (f <$> others)
-        Util.postOtrBroadcastMessage id g alice ac msg !!! do
+        Util.postOtrBroadcastMessage id alice ac msg !!! do
           const 201 === statusCode
          assertTrue_ (eqMismatch [] [] [] . decodeBody)
        void . liftIO $ WS.assertMatch t (Imports.head ws) (wsAssertOtr (selfConv bob) alice ac bc "ciphertext")
@@ -1031,8 +1051,8 @@ postCryptoBroadcastMessage100OrMaxConns g b c a = do
            liftIO $ WS.assertMatch t wsU (wsAssertOtr (selfConv u) alice ac clt "ciphertext")
   where
     createAndConnectUserWhileLimitNotReached alice remaining acc pk = do
-        (uid, cid) <- randomUserWithClient b pk
-        (r1, r2) <- List1.head <$> connectUsersUnchecked b alice (singleton uid)
+        (uid, cid) <- randomUserWithClient pk
+        (r1, r2) <- List1.head <$> connectUsersUnchecked alice (singleton uid)
        case (statusCode r1, statusCode r2, remaining, acc) of
            (201, 200, 0, [] ) -> error "Need to connect with at least 1 user"
            (201, 200, 0, (x:xs)) -> return (x, xs)
diff --git a/services/galley/test/integration/API/Util.hs b/services/galley/test/integration/API/Util.hs
index 64869909754..c2f060c5ef4 100644
--- a/services/galley/test/integration/API/Util.hs
+++ b/services/galley/test/integration/API/Util.hs
@@ -21,8 +21,9 @@ import Galley.Types
 import Galley.Types.Teams hiding (EventType (..))
 import Galley.Types.Teams.Intra
 import Gundeck.Types.Notification
-import Test.Tasty.Cannon (Cannon, TimeoutUnit (..), (#))
+import Test.Tasty.Cannon (TimeoutUnit (..), (#))
 import Test.Tasty.HUnit
+import TestSetup
 
 import qualified Data.ByteString.Base64 as B64
 import qualified Data.ByteString.Char8 as C
@@ -32,32 +33,20 @@ import qualified Data.HashMap.Strict as HashMap
 import qualified Data.Map.Strict as Map
 import qualified Data.Set as Set
 import qualified Data.UUID as UUID
-import qualified Galley.Aws as Aws
 import qualified Galley.Types.Proto as Proto
 import qualified Test.QuickCheck as Q
 import qualified Test.Tasty.Cannon as WS
 
-type Galley = Request -> Request
-type Brig = Request -> Request
-type ResponseLBS = Response (Maybe Lazy.ByteString)
-
-data TestSetup = TestSetup
-  { manager :: Manager
-  , galley :: Galley
-  , brig :: Brig
-  , cannon :: Cannon
-  , awsEnv :: Maybe Aws.Env
-  , maxConvSize :: Word16
-  }
-
+type ResponseLBS = Response (Maybe LByteString)
 
 -------------------------------------------------------------------------------
 -- API Operations
 
 symmPermissions :: [Perm] -> Permissions
 symmPermissions p = let s = Set.fromList p in fromJust (newPermissions s s)
 
-createTeam :: HasCallStack => Galley -> Text -> UserId -> [TeamMember] -> Http TeamId
-createTeam g name owner mems = do
+createTeam :: HasCallStack => Text -> UserId -> [TeamMember] -> TestM TeamId
+createTeam name owner mems = do
+    g <- view tsGalley
     let mm = if null mems then Nothing else Just $ unsafeRange (take 127 mems)
     let nt = NonBindingNewTeam $ newNewTeam (unsafeRange name) (unsafeRange "icon") & newTeamMembers .~ mm
     resp <- post (g . path "/teams" . zUser owner . zConn "conn" . zType "access" . json nt)
 
-changeTeamStatus :: HasCallStack => Galley -> TeamId -> TeamStatus -> Http ()
-changeTeamStatus g tid s = put
+changeTeamStatus :: HasCallStack => TeamId -> TeamStatus -> TestM ()
+changeTeamStatus tid s = do
+    g <- view tsGalley
+    put
         ( g
         . paths ["i", "teams", toByteString' tid, "status"]
         . json (TeamStatusUpdate s Nothing)
         ) !!! const 200 === statusCode
 
-createTeamInternal :: HasCallStack => Galley -> Text -> UserId -> Http TeamId
-createTeamInternal g name owner = do
-    tid <- createTeamInternalNoActivate g name owner
-    changeTeamStatus g tid Active
+createTeamInternal :: HasCallStack => Text -> UserId -> TestM TeamId
+createTeamInternal name owner = do
+    tid <- createTeamInternalNoActivate name owner
+    changeTeamStatus tid Active
     return tid
 
-createTeamInternalNoActivate :: HasCallStack => Galley -> Text -> UserId -> Http TeamId
-createTeamInternalNoActivate g name owner = do
+createTeamInternalNoActivate :: HasCallStack => Text -> UserId -> TestM TeamId
+createTeamInternalNoActivate name owner = do
+    g <- view tsGalley
     tid <- randomId
     let nt = BindingNewTeam $ newNewTeam (unsafeRange name) (unsafeRange "icon")
     _ <- put (g . paths ["/i/teams", toByteString' tid] . zUser owner . zConn "conn" . zType "access" . json nt)
 
-createTeamInternalWithCurrency :: HasCallStack => Galley -> Text -> UserId -> Currency.Alpha -> Http TeamId
-createTeamInternalWithCurrency g name owner cur = do
-    tid <- createTeamInternalNoActivate g name owner
+createTeamInternalWithCurrency :: HasCallStack => Text -> UserId -> Currency.Alpha -> TestM TeamId
+createTeamInternalWithCurrency name owner cur = do
+    g <- view tsGalley
+    tid <- createTeamInternalNoActivate name owner
     _ <- put (g . paths ["i", "teams", toByteString' tid, "status"] . json (TeamStatusUpdate Active $ Just cur)) !!!
         const 200 === statusCode
     return tid
 
-getTeam :: HasCallStack => Galley -> UserId -> TeamId -> Http Team
-getTeam g usr tid = do
+getTeam :: HasCallStack => UserId -> TeamId -> TestM Team
+getTeam usr tid = do
+    g <- view tsGalley
     r <- get (g . paths ["teams", toByteString' tid] . zUser usr)
 
-getTeamMembers :: HasCallStack => Galley -> UserId -> TeamId -> Http TeamMemberList
-getTeamMembers g usr tid = do
+getTeamMembers :: HasCallStack => UserId -> TeamId -> TestM TeamMemberList
+getTeamMembers usr tid = do
+    g <- view tsGalley
     r <- get (g . paths ["teams", toByteString' tid, "members"] . zUser usr)
 
-getTeamMember :: HasCallStack => Galley -> UserId -> TeamId -> UserId -> Http TeamMember
-getTeamMember g usr tid mid = do
+getTeamMember :: HasCallStack => UserId -> TeamId -> UserId -> TestM TeamMember
+getTeamMember usr tid mid = do
+    g <- view tsGalley
     r <- get (g . paths ["teams", toByteString' tid, "members", toByteString' mid] . zUser usr)
 
-getTeamMemberInternal :: HasCallStack => Galley -> TeamId -> UserId -> Http TeamMember
-getTeamMemberInternal g tid mid = do
+getTeamMemberInternal :: HasCallStack => TeamId -> UserId -> TestM TeamMember
+getTeamMemberInternal tid mid = do
+    g <- view tsGalley
     r <- get (g . paths ["i", "teams", toByteString' tid, "members", toByteString' mid])
 
-addTeamMember :: HasCallStack => Galley -> UserId -> TeamId -> TeamMember -> Http ()
-addTeamMember g usr tid mem = do
+addTeamMember :: HasCallStack => UserId -> TeamId -> TeamMember -> TestM ()
+addTeamMember usr tid mem = do
+    g <- view tsGalley
     let payload = json (newNewTeamMember mem)
     post (g . paths ["teams", toByteString' tid, "members"] . zUser usr . zConn "conn" . payload) !!!
         const 200 === statusCode
 
-addTeamMemberInternal :: HasCallStack => Galley -> TeamId -> TeamMember -> Http ()
-addTeamMemberInternal g tid mem = do
+addTeamMemberInternal :: HasCallStack => TeamId -> TeamMember -> TestM ()
+addTeamMemberInternal tid mem = do
+    g <- view tsGalley
     let payload = json (newNewTeamMember mem)
     post (g . paths ["i", "teams", toByteString' tid, "members"] . payload) !!!
         const 200 === statusCode
 
-createTeamConv :: HasCallStack => Galley -> UserId -> TeamId -> [UserId] -> Maybe Text -> Maybe (Set Access) -> Maybe Milliseconds -> Http ConvId
-createTeamConv g u tid us name acc mtimer = createTeamConvAccess g u tid us name acc Nothing mtimer
+createTeamConv :: HasCallStack => UserId -> TeamId -> [UserId] -> Maybe Text -> Maybe (Set Access) -> Maybe Milliseconds -> TestM ConvId
+createTeamConv u tid us name acc mtimer = createTeamConvAccess u tid us name acc Nothing mtimer
 
-createTeamConvAccess :: HasCallStack => Galley -> UserId -> TeamId -> [UserId] -> Maybe Text -> Maybe (Set Access) -> Maybe AccessRole -> Maybe Milliseconds -> Http ConvId
-createTeamConvAccess g u tid us name acc role mtimer = do
-    r <- createTeamConvAccessRaw g u tid us name acc role mtimer
+createTeamConvAccess :: HasCallStack => UserId -> TeamId -> [UserId] -> Maybe Text -> Maybe (Set Access) -> Maybe AccessRole -> Maybe Milliseconds -> TestM ConvId
+createTeamConvAccess u tid us name acc role mtimer = do
+    r <- createTeamConvAccessRaw u tid us name acc role mtimer
 
-createTeamConvAccessRaw :: Galley -> UserId -> TeamId -> [UserId] -> Maybe Text -> Maybe (Set Access) -> Maybe AccessRole -> Maybe Milliseconds -> Http ResponseLBS
-createTeamConvAccessRaw g u tid us name acc role mtimer = do
+createTeamConvAccessRaw :: UserId -> TeamId -> [UserId] -> Maybe Text -> Maybe (Set Access) -> Maybe AccessRole -> Maybe Milliseconds -> TestM ResponseLBS
+createTeamConvAccessRaw u tid us name acc role mtimer = do
+    g <- view tsGalley
     let tinfo = ConvTeamInfo tid False
     let conv = NewConvUnmanaged $ NewConv us name (fromMaybe (Set.fromList []) acc) role (Just tinfo) mtimer Nothing
@@ -146,8 +146,9 @@ createTeamConvAccessRaw g u tid us name acc role mtimer = do
        . json conv
         )
 
-updateTeamConv :: Galley -> UserId -> ConvId -> ConversationRename -> Http ResponseLBS
-updateTeamConv g zusr convid upd = do
+updateTeamConv :: UserId -> ConvId -> ConversationRename -> TestM ResponseLBS
+updateTeamConv zusr convid upd = do
+    g <- view tsGalley
     put ( g
         . paths ["/conversations", toByteString' convid]
         . zUser zusr
@@ -157,8 +158,9 @@ updateTeamConv g zusr convid upd = do
         )
 
 -- | See Note [managed conversations]
-createManagedConv :: HasCallStack => Galley -> UserId -> TeamId -> [UserId] -> Maybe Text -> Maybe (Set Access) -> Maybe Milliseconds -> Http ConvId
-createManagedConv g u tid us name acc mtimer = do
+createManagedConv :: HasCallStack => UserId -> TeamId -> [UserId] -> Maybe Text -> Maybe (Set Access) -> Maybe Milliseconds -> TestM ConvId
+createManagedConv u tid us name acc mtimer = do
+    g <- view tsGalley
     let tinfo = ConvTeamInfo tid True
     let conv = NewConvManaged $ NewConv us name (fromMaybe (Set.fromList []) acc) Nothing (Just tinfo) mtimer Nothing
@@ -172,60 +174,79 @@ createManagedConv g u tid us name acc mtimer = do
 
-createOne2OneTeamConv :: Galley -> UserId -> UserId -> Maybe Text -> TeamId -> Http ResponseLBS
-createOne2OneTeamConv g u1 u2 n tid = do
+createOne2OneTeamConv :: UserId -> UserId -> Maybe Text -> TeamId -> TestM ResponseLBS
+createOne2OneTeamConv u1 u2 n tid = do
+    g <- view tsGalley
     let conv = NewConvUnmanaged $ NewConv [u2] n mempty Nothing (Just $ ConvTeamInfo tid False) Nothing Nothing
     post $ g . path "/conversations/one2one" . zUser u1 . zConn "conn" . zType "access" . json conv
 
-postConv :: Galley -> UserId -> [UserId] -> Maybe Text -> [Access] -> Maybe AccessRole -> Maybe Milliseconds -> Http ResponseLBS
-postConv g u us name a r mtimer = do
+postConv :: UserId -> [UserId] -> Maybe Text -> [Access] -> Maybe AccessRole -> Maybe Milliseconds -> TestM ResponseLBS
+postConv u us name a r mtimer = do
+    g <- view tsGalley
     let conv = NewConvUnmanaged $ NewConv us name (Set.fromList a) r Nothing mtimer Nothing
     post $ g . path "/conversations" . zUser u . zConn "conn" . zType "access" . json conv
 
-postConvWithReceipt :: Galley -> UserId -> [UserId] -> Maybe Text -> [Access] -> Maybe AccessRole -> Maybe Milliseconds -> ReceiptMode -> Http ResponseLBS
-postConvWithReceipt g u us name a r mtimer rcpt = do
+postConvWithReceipt :: UserId -> [UserId] -> Maybe Text -> [Access] -> Maybe AccessRole -> Maybe Milliseconds -> ReceiptMode -> TestM ResponseLBS
+postConvWithReceipt u us name a r mtimer rcpt = do
+    g <- view tsGalley
     let conv = NewConvUnmanaged $ NewConv us name (Set.fromList a) r Nothing mtimer (Just rcpt)
     post $ g . path "/conversations" . zUser u . zConn "conn" . zType "access" . json conv
 
-postSelfConv :: Galley -> UserId -> Http ResponseLBS
-postSelfConv g u = post $ g . path "/conversations/self" . zUser u . zConn "conn" . zType "access"
+postSelfConv :: UserId -> TestM ResponseLBS
+postSelfConv u = do
+    g <- view tsGalley
+    post $ g . path "/conversations/self" . zUser u . zConn "conn" . zType "access"
 
-postO2OConv :: Galley -> UserId -> UserId -> Maybe Text -> Http ResponseLBS
-postO2OConv g u1 u2 n = do
+postO2OConv :: UserId -> UserId -> Maybe Text -> TestM ResponseLBS
+postO2OConv u1 u2 n = do
+    g <- view tsGalley
     let conv = NewConvUnmanaged $ NewConv [u2] n mempty Nothing Nothing Nothing Nothing
     post $ g . path "/conversations/one2one" . zUser u1 . zConn "conn" . zType "access" . json conv
 
-postConnectConv :: Galley -> UserId -> UserId -> Text -> Text -> Maybe Text -> Http ResponseLBS
-postConnectConv g a b name msg email = post $ g
-    . path "/i/conversations/connect"
-    . zUser a
-    . zConn "conn"
-    . zType "access"
-    . json (Connect b (Just msg) (Just name) email)
-
-putConvAccept :: Galley -> UserId -> ConvId -> Http ResponseLBS
-putConvAccept g invited cid = put $ g
-    . paths ["/i/conversations", C.pack $ show cid, "accept", "v2"]
-    . zUser invited
-    . zType "access"
-    . zConn "conn"
-
-postOtrMessage :: (Request -> Request) -> Galley -> UserId -> ClientId -> ConvId -> [(UserId, ClientId, Text)] -> Http ResponseLBS
-postOtrMessage f g u d c rec = post $ g
-    . f
-    . paths ["conversations", toByteString' c, "otr", "messages"]
-    . zUser u . zConn "conn"
-    . zType "access"
-    . json (mkOtrPayload d rec)
-
-postOtrBroadcastMessage :: (Request -> Request) -> Galley -> UserId -> ClientId -> [(UserId, ClientId, Text)] -> Http ResponseLBS
-postOtrBroadcastMessage f g u d rec = post $ g
-    . f
-    . paths ["broadcast", "otr", "messages"]
-    . zUser u . zConn "conn"
-    . zType "access"
-    . json (mkOtrPayload d rec)
+postConnectConv :: UserId -> UserId -> Text -> Text -> Maybe Text -> TestM ResponseLBS
+postConnectConv a b name msg email = do
+    g <- view tsGalley
+    post $ g
+      . path "/i/conversations/connect"
+      . zUser a
+      . zConn "conn"
+      . zType "access"
+      . json (Connect b (Just msg) (Just name) email)
+
+putConvAccept :: UserId -> ConvId -> TestM ResponseLBS
+putConvAccept invited cid = do
+    g <- view tsGalley
+    put $ g
+      . paths ["/i/conversations", C.pack $ show cid, "accept", "v2"]
+      . zUser invited
+      . zType "access"
+      . zConn "conn"
+
+postOtrMessage :: (Request -> Request)
+               -> UserId
+               -> ClientId
+               -> ConvId
+               -> [(UserId, ClientId, Text)]
+               -> TestM ResponseLBS
+postOtrMessage f u d c rec = do
+    g <- view tsGalley
+    post $ g
+      . f
+      . paths ["conversations", toByteString' c, "otr", "messages"]
+      . zUser u . zConn "conn"
+      . zType "access"
+      . json (mkOtrPayload d rec)
+
+postOtrBroadcastMessage :: (Request -> Request) -> UserId -> ClientId -> [(UserId, ClientId, Text)] -> TestM ResponseLBS
+postOtrBroadcastMessage f u d rec = do
+    g <- view tsGalley
+    post $ g
+      . f
+      . paths ["broadcast", "otr", "messages"]
+      . zUser u . zConn "conn"
+      . zType "access"
+      . json (mkOtrPayload d rec)
 
 mkOtrPayload :: ClientId -> [(UserId, ClientId, Text)] -> Value
 mkOtrPayload sender rec = object
@@ -240,16 +261,20 @@ mkOtrMessage (usr, clt, m) = (fn usr, HashMap.singleton (fn clt) m)
     fn :: (FromByteString a, ToByteString a) => a -> Text
     fn = fromJust . fromByteString . toByteString'
 
-postProtoOtrMessage :: Galley -> UserId -> ClientId -> ConvId -> OtrRecipients -> Http ResponseLBS
-postProtoOtrMessage g u d c rec = let m = runPut (encodeMessage $ mkOtrProtoMessage d rec) in post $ g
+postProtoOtrMessage :: UserId -> ClientId -> ConvId -> OtrRecipients -> TestM ResponseLBS
+postProtoOtrMessage u d c rec = do
+    g <- view tsGalley
+    let m = runPut (encodeMessage $ mkOtrProtoMessage d rec) in post $ g
      . paths ["conversations", toByteString' c, "otr", "messages"]
      . zUser u . zConn "conn"
      . zType "access"
      . contentProtobuf
     . bytes m
 
-postProtoOtrBroadcast :: Galley -> UserId -> ClientId -> OtrRecipients -> Http ResponseLBS
-postProtoOtrBroadcast g u d rec = let m = runPut (encodeMessage $ mkOtrProtoMessage d rec) in post $ g
+postProtoOtrBroadcast :: UserId -> ClientId -> OtrRecipients -> TestM ResponseLBS
+postProtoOtrBroadcast u d rec = do
+    g <- view tsGalley
+    let m = runPut (encodeMessage $ mkOtrProtoMessage d rec) in post $ g
      . paths ["broadcast", "otr", "messages"]
      . zUser u . zConn "conn"
     . zType "access"
@@ -262,31 +287,38 @@ mkOtrProtoMessage sender rec =
         sndr = Proto.fromClientId sender
     in Proto.newOtrMessage sndr rcps & Proto.newOtrMessageData ?~ "data"
 
-getConvs :: Galley -> UserId -> Maybe (Either [ConvId] ConvId) -> Maybe Int32 -> Http ResponseLBS
-getConvs g u r s = get $ g
-    . path "/conversations"
-    . zUser u
-    . zConn "conn"
-    . zType "access"
-    . convRange r s
-
-getConv :: Galley -> UserId -> ConvId -> Http ResponseLBS
-getConv g u c = get $ g
-    . paths ["conversations", toByteString' c]
-    . zUser u
-    . zConn "conn"
-    . zType "access"
-
-getConvIds :: Galley -> UserId -> Maybe (Either [ConvId] ConvId) -> Maybe Int32 -> Http ResponseLBS
-getConvIds g u r s = get $ g
-    . path "/conversations/ids"
-    . zUser u
-    . zConn "conn"
-    . zType "access"
-    . convRange r s
-
-postMembers :: Galley -> UserId -> List1 UserId -> ConvId -> Http ResponseLBS
-postMembers g u us c = do
+getConvs :: UserId -> Maybe (Either [ConvId] ConvId) -> Maybe Int32 -> TestM ResponseLBS
+getConvs u r s = do
+    g <- view tsGalley
+    get $ g
+      . path "/conversations"
+      . zUser u
+      . zConn "conn"
+      . zType "access"
+      . convRange r s
+
+getConv :: UserId -> ConvId -> TestM ResponseLBS
+getConv u c = do
+    g <- view tsGalley
+    get $ g
+      . paths ["conversations", toByteString' c]
+      . zUser u
+      . zConn "conn"
+      . zType "access"
+
+getConvIds :: UserId -> Maybe (Either [ConvId] ConvId) -> Maybe Int32 -> TestM ResponseLBS
+getConvIds u r s = do
+    g <- view tsGalley
+    get $ g
+      . path "/conversations/ids"
+      . zUser u
+      . zConn "conn"
+      . zType "access"
+      . convRange r s
+
+postMembers :: UserId -> List1 UserId -> ConvId -> TestM ResponseLBS
+postMembers u us c = do
+    g <- view tsGalley
     let i = Invite us
     post $ g
          . paths ["conversations", toByteString' c, "members"]
@@ -295,104 +327,130 @@ postMembers g u us c = do
         . zType "access"
        . json i
 
-deleteMember :: Galley -> UserId -> UserId -> ConvId -> Http ResponseLBS
-deleteMember g u1 u2 c = delete $ g
-    . zUser u1
-    . paths ["conversations", toByteString' c, "members", toByteString' u2]
-    . zConn "conn"
-    . zType "access"
-
-getSelfMember :: Galley -> UserId -> ConvId -> Http ResponseLBS
-getSelfMember g u c = get $ g
-    . paths ["conversations", toByteString' c, "self"]
-    . zUser u
-    . zConn "conn"
-    . zType "access"
-
-putMember :: Galley -> UserId -> MemberUpdate -> ConvId -> Http ResponseLBS
-putMember g u m c = put $ g
-    . paths ["conversations", toByteString' c, "self"]
-    . zUser u
-    . zConn "conn"
-    . zType "access"
-    . json m
-
-postJoinConv :: Galley -> UserId -> ConvId -> Http ResponseLBS
-postJoinConv g u c = post $ g
-    . paths ["/conversations", toByteString' c, "join"]
-    . zUser u
-    . zConn "conn"
-    . zType "access"
-
-postJoinCodeConv :: Galley -> UserId -> ConversationCode -> Http ResponseLBS
-postJoinCodeConv g u j = post $ g
-    . paths ["/conversations", "join"]
-    . zUser u
-    . zConn "conn"
-    . zType "access"
-    . json j
-
-putAccessUpdate :: Galley -> UserId -> ConvId -> ConversationAccessUpdate -> Http ResponseLBS
-putAccessUpdate g u c acc = put $ g
-    . paths ["/conversations", toByteString' c, "access"]
-    . zUser u
-    . zConn "conn"
-    . zType "access"
-    . json acc
+deleteMember :: UserId -> UserId -> ConvId -> TestM ResponseLBS
+deleteMember u1 u2 c = do
+    g <- view tsGalley
+    delete $ g
+      . zUser u1
+      . paths ["conversations", toByteString' c, "members", toByteString' u2]
+      . zConn "conn"
+      . zType "access"
+
+getSelfMember :: UserId -> ConvId -> TestM ResponseLBS
+getSelfMember u c = do
+    g <- view tsGalley
+    get $ g
+      . paths ["conversations", toByteString' c, "self"]
+      . zUser u
+      . zConn "conn"
+      . zType "access"
+
+putMember :: UserId -> MemberUpdate -> ConvId -> TestM ResponseLBS
+putMember u m c = do
+    g <- view tsGalley
+    put $ g
+      . paths ["conversations", toByteString' c, "self"]
+      . zUser u
+      . zConn "conn"
+      . zType "access"
+      . json m
+
+postJoinConv :: UserId -> ConvId -> TestM ResponseLBS
+postJoinConv u c = do
+    g <- view tsGalley
+    post $ g
+      . paths ["/conversations", toByteString' c, "join"]
+      . zUser u
+      . zConn "conn"
+      . zType "access"
+
+postJoinCodeConv :: UserId -> ConversationCode -> TestM ResponseLBS
+postJoinCodeConv u j = do
+    g <- view tsGalley
+    post $ g
+      . paths ["/conversations", "join"]
+      . zUser u
+      . zConn "conn"
+      . zType "access"
+      . json j
+
+putAccessUpdate :: UserId -> ConvId -> ConversationAccessUpdate -> TestM ResponseLBS
+putAccessUpdate u c acc = do
+    g <- view tsGalley
+    put $ g
+      . paths ["/conversations", toByteString' c, "access"]
+      . zUser u
+      . zConn "conn"
+      . zType "access"
+      . json acc
 
 putMessageTimerUpdate
-    :: Galley -> UserId -> ConvId -> ConversationMessageTimerUpdate -> Http ResponseLBS
-putMessageTimerUpdate g u c acc = put $ g
-    . paths ["/conversations", toByteString' c, "message-timer"]
-    . zUser u
-    . zConn "conn"
-    . zType "access"
-    . json acc
-
-postConvCode :: Galley -> UserId -> ConvId -> Http ResponseLBS
-postConvCode g u c = post $ g
-    . paths ["/conversations", toByteString' c, "code"]
-    . zUser u
-    . zConn "conn"
-    . zType "access"
-
-postConvCodeCheck :: Galley -> ConversationCode -> Http ResponseLBS
-postConvCodeCheck g code = post $ g
-    . path "/conversations/code-check"
-    . json code
-
-getConvCode :: Galley -> UserId -> ConvId -> Http ResponseLBS
-getConvCode g u c = get $ g
-    . paths ["/conversations", toByteString' c, "code"]
-    . zUser u
-    . zConn "conn"
-    . zType "access"
-
-deleteConvCode :: Galley -> UserId -> ConvId -> Http ResponseLBS
-deleteConvCode g u c = delete $ g
-    . paths ["/conversations", toByteString' c, "code"]
-    . zUser u
-    . zConn "conn"
-    . zType "access"
-
-deleteClientInternal :: Galley -> UserId -> ClientId -> Http ResponseLBS
-deleteClientInternal g u c = delete $ g
-    . zUser u
-    . zConn "conn"
-    . paths ["i", "clients", toByteString' c]
-
-deleteUser :: HasCallStack => Galley -> UserId -> Http ()
-deleteUser g u = delete (g . path "/i/user" . zUser u) !!! const 200 === statusCode
-
-assertConvMember :: HasCallStack => Galley -> UserId -> ConvId -> Http ()
-assertConvMember g u c =
-    getSelfMember g u c !!! do
+    :: UserId -> ConvId -> ConversationMessageTimerUpdate -> TestM ResponseLBS
+putMessageTimerUpdate u c acc = do
+    g <- view tsGalley
+    put $ g
+      . paths ["/conversations", toByteString' c, "message-timer"]
+      . zUser u
+      . zConn "conn"
+      . zType "access"
+      . json acc
+
+postConvCode :: UserId -> ConvId -> TestM ResponseLBS
+postConvCode u c = do
+    g <- view tsGalley
+    post $ g
+      . paths ["/conversations", toByteString' c, "code"]
+      . zUser u
+      . zConn "conn"
+      . zType "access"
+
+postConvCodeCheck :: ConversationCode -> TestM ResponseLBS
+postConvCodeCheck code = do
+    g <- view tsGalley
+    post $ g
+      . path "/conversations/code-check"
+      . json code
+
+getConvCode :: UserId -> ConvId -> TestM ResponseLBS
+getConvCode u c = do
+    g <- view tsGalley
+    get $ g
+      . paths ["/conversations", toByteString' c, "code"]
+      . zUser u
+      . zConn "conn"
+      . zType "access"
+
+deleteConvCode :: UserId -> ConvId -> TestM ResponseLBS
+deleteConvCode u c = do
+    g <- view tsGalley
+    delete $ g
+      . paths ["/conversations", toByteString' c, "code"]
+      . zUser u
+      . zConn "conn"
+      . zType "access"
+
+deleteClientInternal :: UserId -> ClientId -> TestM ResponseLBS
+deleteClientInternal u c = do
+    g <- view tsGalley
+    delete $ g
+      . zUser u
+      . zConn "conn"
+      . paths ["i", "clients", toByteString' c]
+
+deleteUser :: HasCallStack => UserId -> TestM ()
+deleteUser u = do
+    g <- view tsGalley
+    delete (g . path "/i/user" . zUser u) !!! const 200 === statusCode
+
+assertConvMember :: HasCallStack => UserId -> ConvId -> TestM ()
+assertConvMember u c =
+    getSelfMember u c !!! do
         const 200 === statusCode
         const (Just u) === (fmap memId <$> decodeBody)
 
-assertNotConvMember :: HasCallStack => Galley -> UserId -> ConvId -> Http ()
-assertNotConvMember g u c =
-    getSelfMember g u c !!! do
+assertNotConvMember :: HasCallStack => UserId -> ConvId -> TestM ()
+assertNotConvMember u c =
+    getSelfMember u c !!! do
         const 200 === statusCode
         const (Just Null) === decodeBody
@@ -421,7 +479,7 @@ assertConv :: HasCallStack
            -> [UserId]
           -> Maybe Text
           -> Maybe Milliseconds
-          -> Http ConvId
+          -> TestM ConvId
 assertConv r t c s us n mt = do
     cId <- fromBS $ getHeader' "Location" r
     let cnv = decodeBody r :: Maybe Conversation
@@ -498,7 +556,7 @@ wsAssertMemberLeave conv usr old n = do
     sorted (Just (EdMembers (Members m))) = Just (EdMembers (Members (sort m)))
     sorted x = x
 
-assertNoMsg :: HasCallStack => WS.WebSocket -> (Notification -> Assertion) -> Http ()
+assertNoMsg :: HasCallStack => WS.WebSocket -> (Notification -> Assertion) -> TestM ()
 assertNoMsg ws f = do
     x <- WS.awaitMatch (1 # Second) ws f
     liftIO $ case x of
@@ -547,23 +605,22 @@ zType = header "Z-Type"
 
 -- TODO: it'd be nicer to just take a list here and handle the cases with 0
 -- users differently
-connectUsers :: Brig -> UserId -> List1 UserId -> Http ()
-connectUsers b u us = void $ connectUsersWith expect2xx b u us
+connectUsers :: UserId -> List1 UserId -> TestM ()
+connectUsers u us = void $ connectUsersWith expect2xx u us
 
-connectUsersUnchecked :: Brig
-                      -> UserId
+connectUsersUnchecked :: UserId
                       -> List1 UserId
-                      -> Http (List1 (Response (Maybe Lazy.ByteString), Response (Maybe Lazy.ByteString)))
+                      -> TestM (List1 (Response (Maybe Lazy.ByteString), Response (Maybe Lazy.ByteString)))
 connectUsersUnchecked = connectUsersWith id
 
 connectUsersWith :: (Request -> Request)
-                 -> Brig
                  -> UserId
                  -> List1 UserId
-                 -> Http (List1 (Response (Maybe Lazy.ByteString), Response (Maybe Lazy.ByteString)))
-connectUsersWith fn b u us = mapM connectTo us
+                 -> TestM (List1 (Response (Maybe Lazy.ByteString), Response (Maybe Lazy.ByteString)))
+connectUsersWith fn u us = mapM connectTo us
   where
     connectTo v = do
+        b <- view tsBrig
        r1 <- post ( b
                  . zUser u
                  . zConn "conn"
@@ -581,40 +638,45 @@ connectUsersWith fn u us = mapM connectTo us
        return (r1, r2)
 
 -- | A copy of 'putConnection' from Brig integration tests.
-putConnection :: Brig -> UserId -> UserId -> Relation -> Http ResponseLBS -putConnection b from to r = put $ b - . paths ["/connections", toByteString' to] - . contentJson - . body payload - . zUser from - . zConn "conn" - where - payload = RequestBodyLBS . encode $ object [ "status" .= r ] - -randomUsers :: Brig -> Int -> Http [UserId] -randomUsers b n = replicateM n (randomUser b) - -randomUser :: HasCallStack => Brig -> Http UserId +putConnection :: UserId -> UserId -> Relation -> TestM ResponseLBS +putConnection from to r = do + b <- view tsBrig + put $ b + . paths ["/connections", toByteString' to] + . contentJson + . body payload + . zUser from + . zConn "conn" + where + payload = RequestBodyLBS . encode $ object [ "status" .= r ] + +randomUsers :: Int -> TestM [UserId] +randomUsers n = replicateM n randomUser + +randomUser :: HasCallStack => TestM UserId randomUser = randomUser' True -randomUser' :: HasCallStack => Bool -> Brig -> Http UserId -randomUser' hasPassword b = do +randomUser' :: HasCallStack => Bool -> TestM UserId +randomUser' hasPassword = do + b <- view tsBrig e <- liftIO randomEmail let p = object $ [ "name" .= fromEmail e, "email" .= fromEmail e] <> [ "password" .= defPassword | hasPassword] r <- post (b . path "/i/users" . json p) Brig -> Http UserId -ephemeralUser b = do +ephemeralUser :: HasCallStack => TestM UserId +ephemeralUser = do + b <- view tsBrig name <- UUID.toText <$> liftIO nextRandom let p = object [ "name" .= name ] r <- post (b . path "/register" . json p) Brig -> UserId -> LastPrekey -> Http ClientId -randomClient b usr lk = do +randomClient :: HasCallStack => UserId -> LastPrekey -> TestM ClientId +randomClient usr lk = do + b <- view tsBrig q <- post (b . path "/clients" . zUser usr . zConn "conn" . 
json newClientBody) Brig -> Bool -> UserId -> UserId -> Http () -ensureDeletedState b check from u = +ensureDeletedState :: HasCallStack => Bool -> UserId -> UserId -> TestM () +ensureDeletedState check from u = do + b <- view tsBrig get ( b . paths ["users", toByteString' u] . zUser from @@ -632,21 +695,24 @@ ensureDeletedState b check from u = ) !!! const (Just check) === fmap profileDeleted . decodeBody -- TODO: Refactor, as used also in brig -deleteClient :: Brig -> UserId -> ClientId -> Maybe PlainTextPassword -> Http ResponseLBS -deleteClient b u c pw = delete $ b - . paths ["clients", toByteString' c] - . zUser u - . zConn "conn" - . contentJson - . body payload - where - payload = RequestBodyLBS . encode $ object - [ "password" .= pw - ] +deleteClient :: UserId -> ClientId -> Maybe PlainTextPassword -> TestM ResponseLBS +deleteClient u c pw = do + b <- view tsBrig + delete $ b + . paths ["clients", toByteString' c] + . zUser u + . zConn "conn" + . contentJson + . body payload + where + payload = RequestBodyLBS . encode $ object + [ "password" .= pw + ] -- TODO: Refactor, as used also in brig -isUserDeleted :: HasCallStack => Brig -> UserId -> Http Bool -isUserDeleted b u = do +isUserDeleted :: HasCallStack => UserId -> TestM Bool +isUserDeleted u = do + b <- view tsBrig r <- get (b . paths ["i", "users", toByteString' u, "status"]) Just a _ -> Nothing -isMember :: Galley -> UserId -> ConvId -> Http Bool -isMember g usr cnv = do +isMember :: UserId -> ConvId -> TestM Bool +isMember usr cnv = do + g <- view tsGalley res <- get $ g . paths ["i", "conversations", toByteString' cnv, "members", toByteString' usr] . 
expect2xx @@ -683,13 +750,13 @@ someLastPrekeys = , lastPrekey "pQABARn//wKhAFgg1rZEY6vbAnEz+Ern5kRny/uKiIrXTb/usQxGnceV2HADoQChAFgglacihnqg/YQJHkuHNFU7QD6Pb3KN4FnubaCF2EVOgRkE9g==" ] -randomUserWithClient :: Brig -> LastPrekey -> Http (UserId, ClientId) -randomUserWithClient b lk = do - u <- randomUser b - c <- randomClient b u lk +randomUserWithClient :: LastPrekey -> TestM (UserId, ClientId) +randomUserWithClient lk = do + u <- randomUser + c <- randomClient u lk return (u, c) -newNonce :: Http (Id ()) +newNonce :: TestM (Id ()) newNonce = randomId decodeBody :: (HasCallStack, FromJSON a) => Response (Maybe Lazy.ByteString) -> Maybe a diff --git a/services/galley/test/integration/Main.hs b/services/galley/test/integration/Main.hs index 6ec7db0c305..641be0de9fa 100644 --- a/services/galley/test/integration/Main.hs +++ b/services/galley/test/integration/Main.hs @@ -20,9 +20,9 @@ import Test.Tasty.Options import Util.Options import Util.Options.Common import Util.Test +import TestSetup (TestSetup(..)) import qualified API -import qualified API.Util as Util import qualified API.SQS as SQS import qualified Data.ByteString.Char8 as BS @@ -79,8 +79,8 @@ main = withOpenSSL $ runTests go e <- join <$> optOrEnvSafe endpoint gConf (fromByteString . BS.pack) "GALLEY_SQS_ENDPOINT" convMaxSize <- optOrEnv maxSize gConf read "CONV_MAX_SIZE" awsEnv <- initAwsEnv e q - SQS.ensureQueueEmpty awsEnv - return $ Util.TestSetup m g b c awsEnv convMaxSize + SQS.ensureQueueEmptyIO awsEnv + return $ TestSetup m g b c awsEnv convMaxSize queueName = fmap (view awsQueueName) . view optJournal endpoint = fmap (view awsEndpoint) . 
view optJournal diff --git a/services/galley/test/integration/TestSetup.hs b/services/galley/test/integration/TestSetup.hs new file mode 100644 index 00000000000..63df987a4f5 --- /dev/null +++ b/services/galley/test/integration/TestSetup.hs @@ -0,0 +1,62 @@ +{-# LANGUAGE GeneralizedNewtypeDeriving #-} +{-# OPTIONS_GHC -fprint-potential-instances #-} +module TestSetup + ( test + , tsManager + , tsGalley + , tsBrig + , tsCannon + , tsAwsEnv + , tsMaxConvSize + , TestM(..) + , TestSetup(..) + ) where + +import Imports +import Test.Tasty (TestName, TestTree) +import Test.Tasty.HUnit (Assertion, testCase) +import Control.Lens ((^.), makeLenses) +import Control.Monad.Catch (MonadCatch, MonadMask, MonadThrow) +import Bilge (HttpT(..), Manager, MonadHttp, Request, runHttpT) + +import qualified Galley.Aws as Aws + +newtype TestM a = + TestM { runTestM :: ReaderT TestSetup (HttpT IO) a + } + deriving ( Functor + , Applicative + , Monad + , MonadReader TestSetup + , MonadIO + , MonadCatch + , MonadThrow + , MonadMask + , MonadHttp + , MonadUnliftIO + ) + +type GalleyR = Request -> Request +type BrigR = Request -> Request +type CannonR = Request -> Request + +data TestSetup = TestSetup + { _tsManager :: Manager + , _tsGalley :: GalleyR + , _tsBrig :: BrigR + , _tsCannon :: CannonR + , _tsAwsEnv :: Maybe Aws.Env + , _tsMaxConvSize :: Word16 + } + +makeLenses ''TestSetup + + +test :: IO TestSetup -> TestName -> TestM a -> TestTree +test s n h = testCase n runTest + where + runTest :: Assertion + runTest = do + setup <- s + void . runHttpT (setup ^. tsManager) . flip runReaderT setup . 
runTestM $ h + From d4bd18caaf1141fa586674d0a3b299cce57b0419 Mon Sep 17 00:00:00 2001 From: Chris Penner Date: Wed, 20 Mar 2019 13:14:07 +0100 Subject: [PATCH 15/23] Checking for 404 is flaky; depends on deletion succeeding (#667) --- services/spar/test-integration/Test/Spar/Scim/UserSpec.hs | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/services/spar/test-integration/Test/Spar/Scim/UserSpec.hs b/services/spar/test-integration/Test/Spar/Scim/UserSpec.hs index 2ce385bfef9..3e49bf7652c 100644 --- a/services/spar/test-integration/Test/Spar/Scim/UserSpec.hs +++ b/services/spar/test-integration/Test/Spar/Scim/UserSpec.hs @@ -519,17 +519,21 @@ specDeleteUser = do !!! const 405 === statusCode describe "DELETE /Users/:id" $ do - it "when called twice, should first delete then 404 you" $ do + it "should respond with 204" $ do (tok, _) <- registerIdPAndScimToken user <- randomScimUser storedUser <- createUser tok user let uid = scimUserId storedUser spar <- view teSpar + -- Expect first call to succeed deleteUser_ (Just tok) (Just uid) spar !!! const 204 === statusCode + -- The second call may return either of 204 or 404 depending on whether Brig has + -- finished deletion. This assertion is here to document that this is currently + -- the expected behaviour deleteUser_ (Just tok) (Just uid) spar - !!! const 404 === statusCode -- https://tools.ietf.org/html/rfc7644#section-3.6 + !!! assertTrue "expected one of 204, 404" ((`elem` [204, 404]) . statusCode) -- FUTUREWORK: hscim has the following test. we should probably go through all -- `delete` tests and see if they can move to hscim or are already included there.
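The tolerant status check in the patch above could be factored into a small reusable predicate. This is only a sketch, not part of the patch; it assumes Bilge's `Response` type and `statusCode` accessor as used in the hunk, and the helper name `hasStatusIn` is hypothetical:

```haskell
import Bilge (Response, statusCode)

-- Hypothetical helper: accept any status code from a whitelist, for
-- responses that race an asynchronous operation (here: brig finishing
-- the user deletion, so both 204 and 404 are acceptable outcomes).
hasStatusIn :: [Int] -> Response a -> Bool
hasStatusIn acceptable r = statusCode r `elem` acceptable
```

With such a helper, the assertion in the test would read `assertTrue "expected one of 204, 404" (hasStatusIn [204, 404])`.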
From 5df850d31b78a70bae1328950ab493fefbef920b Mon Sep 17 00:00:00 2001 From: Julia Longtin Date: Wed, 20 Mar 2019 14:42:46 +0100 Subject: [PATCH 16/23] take some missed feedback into account from PR #622 (#668) --- deploy/docker-ephemeral/build/README.md | 8 ++++---- docs/reference/make-docker-and-qemu.md | 6 +++--- 2 files changed, 7 insertions(+), 7 deletions(-) diff --git a/deploy/docker-ephemeral/build/README.md b/deploy/docker-ephemeral/build/README.md index 0ce32305265..59fbb2ba86d 100644 --- a/deploy/docker-ephemeral/build/README.md +++ b/deploy/docker-ephemeral/build/README.md @@ -29,18 +29,18 @@ to build an individual image (and it's dependent images), run "make-" ## Using with Dockerhub -If you want to upload images to dockerhub, you must go to dockerhub, and create repositories under your user with the names of the images you want to upload. Again, to get the list of names buildable with this Makefile, type 'make names'. +If you want to upload images to dockerhub, you must go to dockerhub, and create repositories under your user with the names of the images you want to upload. Use `make names` to get the list of buildable images. If you don't want to change the Makefile, add the DOCKER_USERNAME, DOCKER_EMAIL, and DOCKER_REALNAME environment variables. -For instance, when I want to build all debian images, and upload them to dockerhub, i use: +For instance, when I want to build all debian images, and upload them to dockerhub, I use: ```bash make DIST=DEBIAN DOCKER_USERNAME=julialongtin DOCKER_EMAIL=julia.longtin@wire.com DOCKER_REALNAME='Julia Longtin' push-all ``` -You can also push a single image (and it's dependencies) with "make push-". +You can also push a single image (and its dependent images) with "make push-". -If you want your builds to go faster, and are good with having more garbled output, use the '-j' argument to make, to parallize the builds.
+If you want your builds to go faster, and are okay with having interleaved output from multiple builds, use the '-j' argument to make, to parallelize the builds. '-j' can take an integer argument for the number of threads you want it to run at once, or no argument for 'all of the things you can figure out how to do at once'. By default this makefile builds and uploads the debian based images. Use the 'DIST=ALPINE' environment variable to build the alpine based images instead. diff --git a/docs/reference/make-docker-and-qemu.md b/docs/reference/make-docker-and-qemu.md index c4ab362d27c..3088eda1a0d 100644 --- a/docs/reference/make-docker-and-qemu.md +++ b/docs/reference/make-docker-and-qemu.md @@ -80,7 +80,7 @@ That wasn't so bad, was it? The sed commands used above accomplished two things. One, they changed out the MAINTAINER line in the Dockerfile, to indicate that I am the maintainer of this docker image. Two, for the 386 image, it specified that Docker was to start by using the i386 version of debian to base the image off of, not the AMD64 version. we did not need to make that change to the AMD64 version of the Dockerfile, because Docker on our local machine automatically downloads AMD64 images, since our copies of docker were built on AMD64 machines. ##### OK, what was the --amend on the docker manifest create line? -Docker creates manifest files, and stores them in your local docker. I haven't found a good way to remove them, so instead, I add --amend, so that docker changes the local file, instead of just telling you it already exists, and failing. +Docker creates manifest files, and stores them in your local docker. I haven't found a good way to remove them, so instead, I add --amend, so that if one already exists, docker overwrites the locally stored file, instead of just telling you one already exists, and exiting with an error.
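To make the `--amend` and `inspect` steps concrete, here is a hedged sketch of the commands (the repository and tag names are illustrative, not from the Makefile; `namshi/smtp` is the image inspected in the example that follows; all of this needs a docker daemon with manifest support enabled):

```bash
# Recreate the manifest list, overwriting any stale local copy (--amend)
# instead of failing because one already exists. Tag names are illustrative.
docker manifest create --amend julialongtin/smtp:latest \
    julialongtin/smtp:amd64 julialongtin/smtp:386
# Inspect a manifest list; works for local or remote (dockerhub) manifests.
docker manifest inspect namshi/smtp
```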
for example, here's the original namshi/smtp manifest. @@ -161,8 +161,8 @@ This is very different. instead of showing layers, it has the SHASUMs of the ima That's it as far as the docker parts of this. Simple, right? :) ### Limits of Manifest files: -I can't figgure out how to delete local manifest files. -I haven't figgured out how to point to local images in a manifest file. this means if we use the name of a manifest in our Docker compose configuration, docker will go out to dockerhub for the image, rather than using a new image we just built, and we have to build a manifest file AFTER the push do dockerhub has been completed. +I can't figure out how to delete local manifest files. +I haven't figured out how to point to local images in a manifest file. This means if we use the name of a manifest in our Docker compose configuration, docker will go out to dockerhub for the image, rather than using a new image we just built, and we have to build a manifest file AFTER the push to dockerhub has been completed. ## QEMU + BinFmt Support: From 50573c815fd86270ab406396d33137d7b15f7ee6 Mon Sep 17 00:00:00 2001 From: fisx Date: Thu, 21 Mar 2019 11:18:30 +0100 Subject: [PATCH 17/23] Bump cql-io dep from merge request to latest release. (#661) * Bump cql-io dep from merge request to latest release.
* Add LoggerT with needed dependencies * Fix broken gundeck test which now requires a logger * Tweaks & Refactoring --- .linting/duplicate-ids-whitelist.txt | 7 + libs/cassandra-util/package.yaml | 3 +- libs/cassandra-util/src/Cassandra/Schema.hs | 14 +- libs/cassandra-util/src/Cassandra/Settings.hs | 3 + libs/cassandra-util/src/Cassandra/Util.hs | 16 +- libs/extended/package.yaml | 2 + libs/extended/src/System/Logger/Extended.hs | 32 + package-defaults.yaml | 1 + services/brig/index/src/Eval.hs | 3 +- services/brig/src/Brig/App.hs | 5 +- services/galley/journaler/src/Main.hs | 3 +- services/galley/src/Galley/App.hs | 5 +- services/gundeck/package.yaml | 1 + services/gundeck/src/Gundeck/Env.hs | 5 +- services/gundeck/test/integration/API.hs | 550 +++++++++--------- services/gundeck/test/integration/Main.hs | 13 +- .../gundeck/test/integration/TestSetup.hs | 67 +++ services/gundeck/test/integration/Types.hs | 7 - services/spar/src/Spar/Run.hs | 3 +- snapshots/wire-1.2.yaml | 6 + stack.yaml | 2 +- tools/db/auto-whitelist/src/Main.hs | 3 +- tools/db/service-backfill/src/Main.hs | 3 +- 23 files changed, 445 insertions(+), 309 deletions(-) create mode 100644 services/gundeck/test/integration/TestSetup.hs delete mode 100644 services/gundeck/test/integration/Types.hs create mode 100644 snapshots/wire-1.2.yaml diff --git a/.linting/duplicate-ids-whitelist.txt b/.linting/duplicate-ids-whitelist.txt index 7db0a5822e4..54192d56961 100644 --- a/.linting/duplicate-ids-whitelist.txt +++ b/.linting/duplicate-ids-whitelist.txt @@ -649,3 +649,10 @@ zAuthAccess 2 zConn 5 zUser 6 zauth 3 +BrigR 2 +CannonR 2 +TestM 2 +_tsBrig 2 +_tsCannon 2 +_tsManager 2 +runTestM 2 diff --git a/libs/cassandra-util/package.yaml b/libs/cassandra-util/package.yaml index 49e477ad64d..468218d3316 100644 --- a/libs/cassandra-util/package.yaml +++ b/libs/cassandra-util/package.yaml @@ -1,4 +1,4 @@ -defaults: +defaults: local: ../../package-defaults.yaml name: cassandra-util version: '0.16.5' @@ -14,6 +14,7 
@@ dependencies: - conduit - cql >=3.0.0 - cql-io >=0.14 +- cql-io-tinylog - dns >=3.0 - errors >=1.4 - exceptions >=0.6 diff --git a/libs/cassandra-util/src/Cassandra/Schema.hs b/libs/cassandra-util/src/Cassandra/Schema.hs index 136e547cb6f..ed0b0f2ec8f 100644 --- a/libs/cassandra-util/src/Cassandra/Schema.hs +++ b/libs/cassandra-util/src/Cassandra/Schema.hs @@ -33,10 +33,11 @@ import Data.UUID (UUID) import Database.CQL.IO import Database.CQL.Protocol (Request (..), Query (..)) import Options.Applicative hiding (info) -import System.Logger (Logger, Level (..), log, msg) -import qualified Data.Text.Lazy as LT +import qualified Database.CQL.IO.Tinylog as CT import qualified Data.List.NonEmpty as NonEmpty +import qualified Data.Text.Lazy as LT +import qualified System.Logger as Log data Migration = Migration { migVersion :: Int32 @@ -128,11 +129,12 @@ useKeyspace (Keyspace k) = void . getResult =<< qry prms = QueryParams One False () Nothing Nothing Nothing Nothing cql = QueryString $ "use \"" <> fromStrict k <> "\"" -migrateSchema :: Logger -> MigrationOpts -> [Migration] -> IO () +migrateSchema :: Log.Logger -> MigrationOpts -> [Migration] -> IO () migrateSchema l o ms = do hosts <- initialContactsPlain $ pack (migHost o) - p <- Database.CQL.IO.init l $ - setContacts (NonEmpty.head hosts) (NonEmpty.tail hosts) + p <- Database.CQL.IO.init $ + setLogger (CT.mkLogger l) + . setContacts (NonEmpty.head hosts) (NonEmpty.tail hosts) . setPortNumber (fromIntegral $ migPort o) . setMaxConnections 1 . setPoolStripes 1 @@ -174,7 +176,7 @@ migrateSchema l o ms = do . sortBy (\x y -> migVersion x `compare` migVersion y) $ ms - info = liftIO . log l Info . msg + info = liftIO . Log.log l Log.Info . 
Log.msg dropKeyspace :: Keyspace -> QueryString S () () dropKeyspace (Keyspace k) = QueryString $ "drop keyspace if exists \"" <> fromStrict k <> "\"" diff --git a/libs/cassandra-util/src/Cassandra/Settings.hs b/libs/cassandra-util/src/Cassandra/Settings.hs index 7791c35ed49..1827bef6f41 100644 --- a/libs/cassandra-util/src/Cassandra/Settings.hs +++ b/libs/cassandra-util/src/Cassandra/Settings.hs @@ -21,6 +21,8 @@ module Cassandra.Settings , setResponseTimeout , setRetrySettings , setPolicy + , setLogger + , mkLogger , initialContactsDisco , initialContactsPlain ) where @@ -29,6 +31,7 @@ import Imports import Control.Lens import Data.Aeson.Lens import Database.CQL.IO hiding (values) +import Database.CQL.IO.Tinylog (mkLogger) import Data.List.NonEmpty (NonEmpty (..)) import Data.Text (pack, stripSuffix, unpack) import Network.Wreq diff --git a/libs/cassandra-util/src/Cassandra/Util.hs b/libs/cassandra-util/src/Cassandra/Util.hs index 3df241f5883..ee47ee24c0e 100644 --- a/libs/cassandra-util/src/Cassandra/Util.hs +++ b/libs/cassandra-util/src/Cassandra/Util.hs @@ -6,16 +6,20 @@ import Cassandra.Settings import Data.Text (unpack) import Data.Time (UTCTime) import Data.Time.Clock.POSIX(posixSecondsToUTCTime) -import System.Logger (Logger) + +import qualified Database.CQL.IO.Tinylog as CT +import qualified System.Logger as Log type Writetime a = Int64 writeTimeToUTC :: Writetime a -> UTCTime writeTimeToUTC = posixSecondsToUTCTime . fromIntegral . (`div` 1000000) -defInitCassandra :: Text -> Text -> Word16 -> Logger -> IO ClientState +defInitCassandra :: Text -> Text -> Word16 -> Log.Logger -> IO ClientState defInitCassandra ks h p lg = - init lg $ setPortNumber (fromIntegral p) - . setContacts (unpack h) [] - . setKeyspace (Keyspace ks) - $ defSettings + init + $ setLogger (CT.mkLogger lg) + . setPortNumber (fromIntegral p) + . setContacts (unpack h) [] + . 
setKeyspace (Keyspace ks) + $ defSettings diff --git a/libs/extended/package.yaml b/libs/extended/package.yaml index deee7361404..d06fe0db63b 100644 --- a/libs/extended/package.yaml +++ b/libs/extended/package.yaml @@ -20,6 +20,8 @@ dependencies: - optparse-applicative - tinylog - unliftio +- cql-io +- exceptions library: source-dirs: src stability: experimental diff --git a/libs/extended/src/System/Logger/Extended.hs b/libs/extended/src/System/Logger/Extended.hs index bd24ff6801e..5d251a25fb9 100644 --- a/libs/extended/src/System/Logger/Extended.hs +++ b/libs/extended/src/System/Logger/Extended.hs @@ -1,12 +1,19 @@ +{-# LANGUAGE GeneralizedNewtypeDeriving #-} +{-# LANGUAGE DerivingStrategies #-} -- | Tinylog convenience things. module System.Logger.Extended ( mkLogger , mkLogger' + , LoggerT(..) + , runWithLogger ) where import Imports +import Control.Monad.Catch +import Database.CQL.IO import qualified System.Logger as Log +import qualified System.Logger.Class as LC mkLogger :: Log.Level -> Bool -> IO Log.Logger mkLogger lvl netstr = Log.new' @@ -23,3 +30,28 @@ mkLogger' = Log.new . Log.setOutput Log.StdOut . Log.setFormat Nothing $ Log.defSettings + +-- | It's a bit odd that we mention 'MonadClient' from the cql-io package here, but it's the +-- easiest way to get things done. Alternatively, we could introduce 'LoggerT' in the gundeck +-- integration tests, which is the only place in the world where it is currently used, but we +-- may need it elsewhere in the future and here it's easier to find. 
+newtype LoggerT m a = LoggerT {runLoggerT :: ReaderT Log.Logger m a} + deriving newtype + (Functor + , Applicative + , Monad + , MonadIO + , MonadThrow + , MonadCatch + , MonadMask + , MonadClient + ) + +instance (MonadIO m) => LC.MonadLogger (LoggerT m) where + log :: LC.Level -> (LC.Msg -> LC.Msg) -> LoggerT m () + log l m = LoggerT $ do + logger <- ask + Log.log logger l m + +runWithLogger :: Log.Logger -> LoggerT m a -> m a +runWithLogger logger = flip runReaderT logger . runLoggerT diff --git a/package-defaults.yaml b/package-defaults.yaml index 5f48511d907..73952a67d7f 100644 --- a/package-defaults.yaml +++ b/package-defaults.yaml @@ -11,6 +11,7 @@ default-extensions: - ConstraintKinds - DataKinds - DefaultSignatures +- DerivingStrategies - DeriveFunctor - DeriveGeneric - DeriveLift diff --git a/services/brig/index/src/Eval.hs b/services/brig/index/src/Eval.hs index 591a5d00e54..706faf297fd 100644 --- a/services/brig/index/src/Eval.hs +++ b/services/brig/index/src/Eval.hs @@ -42,7 +42,8 @@ runCommand l = \case <$> newManager defaultManagerSettings initDb cas - = C.init l + = C.init + $ C.setLogger (C.mkLogger l) . C.setContacts (view cHost cas) [] . C.setPortNumber (fromIntegral (view cPort cas)) . C.setKeyspace (view cKeyspace cas) diff --git a/services/brig/src/Brig/App.hs b/services/brig/src/Brig/App.hs index daf5f531914..a23011c6aac 100644 --- a/services/brig/src/Brig/App.hs +++ b/services/brig/src/Brig/App.hs @@ -352,8 +352,9 @@ initCassandra o g = do c <- maybe (Cas.initialContactsPlain ((Opt.cassandra o)^.casEndpoint.epHost)) (Cas.initialContactsDisco "cassandra_brig") (unpack <$> Opt.discoUrl o) - p <- Cas.init (Log.clone (Just "cassandra.brig") g) - $ Cas.setContacts (NE.head c) (NE.tail c) + p <- Cas.init + $ Cas.setLogger (Cas.mkLogger (Log.clone (Just "cassandra.brig") g)) + . Cas.setContacts (NE.head c) (NE.tail c) . Cas.setPortNumber (fromIntegral ((Opt.cassandra o)^.casEndpoint.epPort)) . 
Cas.setKeyspace (Keyspace ((Opt.cassandra o)^.casKeyspace)) . Cas.setMaxConnections 4 diff --git a/services/galley/journaler/src/Main.hs b/services/galley/journaler/src/Main.hs index a6cc2132f09..105d882d6a9 100644 --- a/services/galley/journaler/src/Main.hs +++ b/services/galley/journaler/src/Main.hs @@ -60,7 +60,8 @@ main = withOpenSSL $ do Aws.mkEnv l mgr o initCas cas l - = C.init l + = C.init + $ C.setLogger (C.mkLogger l) . C.setContacts (cas^.cHosts) [] . C.setPortNumber (fromIntegral $ cas^.cPort) . C.setKeyspace (cas^.cKeyspace) diff --git a/services/galley/src/Galley/App.hs b/services/galley/src/Galley/App.hs index be2454ecca1..98da107812f 100644 --- a/services/galley/src/Galley/App.hs +++ b/services/galley/src/Galley/App.hs @@ -138,8 +138,9 @@ initCassandra o l = do c <- maybe (C.initialContactsPlain (o^.optCassandra.casEndpoint.epHost)) (C.initialContactsDisco "cassandra_galley") (unpack <$> o^.optDiscoUrl) - C.init (Logger.clone (Just "cassandra.galley") l) $ - C.setContacts (NE.head c) (NE.tail c) + C.init + . C.setLogger (C.mkLogger (Logger.clone (Just "cassandra.galley") l)) + . C.setContacts (NE.head c) (NE.tail c) . C.setPortNumber (fromIntegral $ o^.optCassandra.casEndpoint.epPort) . C.setKeyspace (Keyspace $ o^.optCassandra.casKeyspace) . C.setMaxConnections 4 diff --git a/services/gundeck/package.yaml b/services/gundeck/package.yaml index ea889163da2..d6b9edf4b50 100644 --- a/services/gundeck/package.yaml +++ b/services/gundeck/package.yaml @@ -123,6 +123,7 @@ executables: - brig-types - cassandra-util - containers + - exceptions - gundeck - gundeck-types - http-client diff --git a/services/gundeck/src/Gundeck/Env.hs b/services/gundeck/src/Gundeck/Env.hs index 34cb9c314d4..94aa49f4ccc 100644 --- a/services/gundeck/src/Gundeck/Env.hs +++ b/services/gundeck/src/Gundeck/Env.hs @@ -59,8 +59,9 @@ createEnv m o = do . Redis.setConnectTimeout 3 . 
Redis.setSendRecvTimeout 5 $ Redis.defSettings - p <- C.init (Logger.clone (Just "cassandra.gundeck") l) $ - C.setContacts (NE.head c) (NE.tail c) + p <- C.init $ + C.setLogger (C.mkLogger (Logger.clone (Just "cassandra.gundeck") l)) + . C.setContacts (NE.head c) (NE.tail c) . C.setPortNumber (fromIntegral $ o^.optCassandra.casEndpoint.epPort) . C.setKeyspace (Keyspace (o^.optCassandra.casKeyspace)) . C.setMaxConnections 4 diff --git a/services/gundeck/test/integration/API.hs b/services/gundeck/test/integration/API.hs index b1cc4e48a46..043154baeb0 100644 --- a/services/gundeck/test/integration/API.hs +++ b/services/gundeck/test/integration/API.hs @@ -22,9 +22,9 @@ import Network.URI (parseURI) import Safe import System.Random (randomIO) import System.Timeout (timeout) +import TestSetup import Test.Tasty import Test.Tasty.HUnit -import Types import qualified Cassandra as Cql import qualified Data.Aeson.Types as Aeson @@ -41,36 +41,11 @@ import qualified Gundeck.Push.Data as Push import qualified Network.HTTP.Client as Http import qualified Network.WebSockets as WS import qualified Prelude +import qualified System.Logger.Extended as Log appName :: AppName appName = AppName "test" -data TestSetup = TestSetup - { manager :: Manager - , gundeck :: Gundeck - , cannon :: Cannon - , cannon2 :: Cannon - , brig :: Brig - , cass :: Cql.ClientState - } - -type TestSignature a = Gundeck -> Cannon -> Brig -> Cql.ClientState -> Http a -type TestSignature2 a = Gundeck -> Cannon -> Cannon -> Brig -> Cql.ClientState -> Http a - -test :: IO TestSetup -> TestName -> (TestSignature a) -> TestTree -test setup n h = testCase n runTest - where - runTest = do - s <- setup - void $ runHttpT (manager s) (h (gundeck s) (cannon s) (brig s) (cass s)) - -test2 :: IO TestSetup -> TestName -> (TestSignature2 a) -> TestTree -test2 setup n h = testCase n runTest - where - runTest = do - s <- setup - void $ runHttpT (manager s) (h (gundeck s) (cannon s) (cannon2 s) (brig s) (cass s)) - tests :: IO 
TestSetup -> TestTree tests s = testGroup "Gundeck integration tests" [ testGroup "Push" @@ -79,8 +54,8 @@ tests s = testGroup "Gundeck integration tests" [ , test s "Replace presence" $ replacePresence , test s "Remove stale presence" $ removeStalePresence , test s "Single user push" $ singleUserPush - , test2 s "Push many to Cannon via bulkpush (via gundeck; group notif)" $ bulkPush False 50 8 - , test2 s "Push many to Cannon via bulkpush (via gundeck; e2e notif)" $ bulkPush True 50 8 + , test s "Push many to Cannon via bulkpush (via gundeck; group notif)" $ bulkPush False 50 8 + , test s "Push many to Cannon via bulkpush (via gundeck; e2e notif)" $ bulkPush True 50 8 , test s "Send a push, ensure origin does not receive it" $ sendSingleUserNoPiggyback , test s "Targeted push by connection" $ targetConnectionPush , test s "Targeted push by client" $ targetClientPush @@ -123,34 +98,39 @@ tests s = testGroup "Gundeck integration tests" [ ----------------------------------------------------------------------------- -- Push -addUser :: TestSignature (UserId, ConnId) -addUser gu ca _ _ = registerUser gu ca +addUser :: TestM (UserId, ConnId) +addUser = registerUser -removeUser :: TestSignature () -removeUser g c _ s = do - user <- fst <$> registerUser g c +removeUser :: TestM () +removeUser = do + g <- view tsGundeck + s <- view tsCass + logger <- view tsLogger + user <- fst <$> registerUser clt <- randomClientId tok <- randomToken clt gcmToken - _ <- registerPushToken user tok g - _ <- sendPush g (buildPush user [(user, RecipientClientsAll)] (textPayload "data")) + _ <- registerPushToken user tok + _ <- sendPush (buildPush user [(user, RecipientClientsAll)] (textPayload "data")) deleteUser g user - ntfs <- listNotifications user Nothing g + ntfs <- listNotifications user Nothing liftIO $ do - tokens <- Cql.runClient s (Push.lookup user Push.Quorum) + tokens <- Cql.runClient s (Log.runWithLogger logger $ Push.lookup user Push.Quorum) null tokens @?= True ntfs @?= [] 
-replacePresence :: TestSignature ()
-replacePresence gu ca _ _ = do
+replacePresence :: TestM ()
+replacePresence = do
+    gu <- view tsGundeck
+    ca <- view tsCannon
     uid <- randomId
     con <- randomConnId
     let localhost8080 = URI . fromJust $ parseURI "http://localhost:8080"
     let localhost8081 = URI . fromJust $ parseURI "http://localhost:8081"
     let pres1 = Presence uid (ConnId "dummy_dev") localhost8080 Nothing 0 ""
     let pres2 = Presence uid (ConnId "dummy_dev") localhost8081 Nothing 0 ""
-    void $ connectUser gu ca uid con
+    void $ connectUser ca uid con
     setPresence gu pres1 !!! const 201 === statusCode
-    sendPush gu (push uid [uid])
+    sendPush (push uid [uid])
     getPresence gu (showUser uid) !!! do
         const 2 === length . decodePresence
     assertTrue "Cannon is not removed" $
@@ -167,28 +147,30 @@ replacePresence gu ca _ _ = do
     pload = List1.singleton $ HashMap.fromList [ "foo" .= (42 :: Int) ]
     push u us = newPush u (toRecipients us) pload & pushOriginConnection .~ Just (ConnId "dev")
 
-removeStalePresence :: TestSignature ()
-removeStalePresence gu ca _ _ = do
+removeStalePresence :: TestM ()
+removeStalePresence = do
+    ca <- view tsCannon
     uid <- randomId
     con <- randomConnId
-    void $ connectUser gu ca uid con
-    ensurePresent gu uid 1
-    sendPush gu (push uid [uid])
+    void $ connectUser ca uid con
+    ensurePresent uid 1
+    sendPush (push uid [uid])
     m <- liftIO newEmptyMVar
     w <- wsRun ca uid con (wsCloser m)
-    wsAssertPresences gu uid 1
+    wsAssertPresences uid 1
     liftIO $ void $ putMVar m () >> wait w
-    sendPush gu (push uid [uid])
-    ensurePresent gu uid 0
+    sendPush (push uid [uid])
+    ensurePresent uid 0
   where
     pload = List1.singleton $ HashMap.fromList [ "foo" .= (42 :: Int) ]
     push u us = newPush u (toRecipients us) pload & pushOriginConnection .~ Just (ConnId "dev")
 
-singleUserPush :: TestSignature ()
-singleUserPush gu ca _ _ = do
+singleUserPush :: TestM ()
+singleUserPush = do
+    ca <- view tsCannon
     uid <- randomId
-    ch <- connectUser gu ca uid =<< randomConnId
-    sendPush gu (push uid [uid])
+    ch <- connectUser ca uid =<< randomConnId
+    sendPush (push uid [uid])
     liftIO $ do
         msg <- waitForMessage ch
         assertBool "No push message received" (isJust msg)
@@ -204,8 +186,10 @@ singleUserPush gu ca _ _ = do
 -- notifications from server (@isE2E == False@) to all connections, and make sure they all arrive at
 -- the destination devices.  This also works if you pass the same 'Cannon' twice, even if 'Cannon'
 -- is a k8s load balancer that dispatches requests to different replicas.
-bulkPush :: Bool -> Int -> Int -> TestSignature2 ()
-bulkPush isE2E numUsers numConnsPerUser gu ca ca2 _ _ = do
+bulkPush :: Bool -> Int -> Int -> TestM ()
+bulkPush isE2E numUsers numConnsPerUser = do
+    ca <- view tsCannon
+    ca2 <- view tsCannon
     uids@(uid:_) :: [UserId] <- replicateM numUsers randomId
     (connids@((_:_):_)) :: [[ConnId]] <- replicateM numUsers $ replicateM numConnsPerUser randomConnId
     let ucs :: [(UserId, [ConnId])] = zip uids connids
@@ -214,17 +198,17 @@ bulkPush isE2E numUsers numConnsPerUser gu ca ca2 _ _ = do
     chs <- do
         let (ucs1, ucs2) = splitAt (fromIntegral (length ucs `div` 2)) ucs
             (ucs1', ucs2') = splitAt (fromIntegral (length ucs `div` 2)) ucs'
-        chs1 <- injectucs ca ucs1' . fmap snd <$> connectUsersAndDevices gu ca ucs1
-        chs2 <- injectucs ca2 ucs2' . fmap snd <$> connectUsersAndDevices gu ca2 ucs2
+        chs1 <- injectucs ca ucs1' . fmap snd <$> connectUsersAndDevices ca ucs1
+        chs2 <- injectucs ca2 ucs2' . fmap snd <$> connectUsersAndDevices ca2 ucs2
         pure $ chs1 ++ chs2
     let pushData = mconcat . replicate 3 $ (if isE2E then pushE2E else pushGroup) uid ucs'
-    sendPushes gu pushData
+    sendPushes pushData
     liftIO $ forConcurrently_ chs $ replicateM 3 . checkMsg
   where
     -- associate chans with userid, connid.
-    injectucs :: Cannon -> [(UserId, [(ConnId, Bool)])] -> [[TChan ByteString]]
-              -> [(Cannon, UserId, ((ConnId, Bool), TChan ByteString))]
+    injectucs :: CannonR -> [(UserId, [(ConnId, Bool)])] -> [[TChan ByteString]]
+              -> [(CannonR, UserId, ((ConnId, Bool), TChan ByteString))]
     injectucs ca_ ucs chs = mconcat $
         zipWith (\(uid, connids) chs_ -> (ca_, uid,) <$> zip connids chs_) ucs chs
 
     -- will a notification actually be sent?
@@ -271,12 +255,13 @@ bulkPush isE2E numUsers numConnsPerUser gu ca ca2 _ _ = do
         else do
             assertBool "Unexpected push message received" (isNothing msg)
 
-sendSingleUserNoPiggyback :: TestSignature ()
-sendSingleUserNoPiggyback gu ca _ _ = do
+sendSingleUserNoPiggyback :: TestM ()
+sendSingleUserNoPiggyback = do
+    ca <- view tsCannon
     uid <- randomId
     did <- randomConnId
-    ch <- connectUser gu ca uid did
-    sendPush gu (push uid [uid] did)
+    ch <- connectUser ca uid did
+    sendPush (push uid [uid] did)
     liftIO $ do
         msg <- waitForMessage ch
         assertBool "Push message received" (isNothing msg)
@@ -284,17 +269,18 @@ sendSingleUserNoPiggyback gu ca _ _ = do
     pload = List1.singleton $ HashMap.fromList [ "foo" .= (42 :: Int) ]
     push u us d = newPush u (toRecipients us) pload & pushOriginConnection .~ Just d
 
-sendMultipleUsers :: TestSignature ()
-sendMultipleUsers gu ca _ _ = do
+sendMultipleUsers :: TestM ()
+sendMultipleUsers = do
+    ca <- view tsCannon
     uid1 <- randomId -- offline and no native push
     uid2 <- randomId -- online
     uid3 <- randomId -- offline and native push
     clt <- randomClientId
     tok <- randomToken clt gcmToken
-    _ <- registerPushToken uid3 tok gu
+    _ <- registerPushToken uid3 tok
 
-    ws <- connectUser gu ca uid2 =<< randomConnId
-    sendPush gu (push uid1 [uid1, uid2, uid3])
+    ws <- connectUser ca uid2 =<< randomConnId
+    sendPush (push uid1 [uid1, uid2, uid3])
     -- 'uid2' should get the push over the websocket
     liftIO $ do
         msg <- waitForMessage ws
@@ -306,11 +292,11 @@ sendMultipleUsers gu ca _ _ = do
     -- We should get a 'DeliveryFailure' and / or 'EndpointUpdated'
     -- via SQS and thus remove the token.
     liftIO $ putStrLn "Waiting for SQS feedback to remove the token (~60-90s) ..."
-    void $ retryWhileN 90 (not . null) (listPushTokens uid3 gu)
+    void $ retryWhileN 90 (not . null) (listPushTokens uid3)
 
     -- 'uid1' and 'uid2' should each have 1 notification
-    ntfs1 <- listNotifications uid1 Nothing gu
-    ntfs2 <- listNotifications uid2 Nothing gu
+    ntfs1 <- listNotifications uid1 Nothing
+    ntfs2 <- listNotifications uid2 Nothing
     liftIO $ forM_ [ntfs1, ntfs2] $ \ntfs -> do
         assertEqual "Not exactly 1 notification" 1 (length ntfs)
         let p = view queuedNotificationPayload (Prelude.head ntfs)
@@ -318,7 +304,7 @@ sendMultipleUsers gu ca _ _ = do
 
     -- 'uid3' should have two notifications, one for the message and one
     -- for the removed token.
-    ntfs3 <- listNotifications uid3 Nothing gu
+    ntfs3 <- listNotifications uid3 Nothing
     liftIO $ do
         assertBool "Not at least 2 notifications" (length ntfs3 >= 2)
         let (n1,nx) = checkNotifications ntfs3
@@ -337,13 +323,14 @@ sendMultipleUsers gu ca _ _ = do
     pevent = HashMap.fromList [ "foo" .= (42 :: Int) ]
     push u us = newPush u (toRecipients us) pload & pushOriginConnection .~ Just (ConnId "dev")
 
-targetConnectionPush :: TestSignature ()
-targetConnectionPush gu ca _ _ = do
+targetConnectionPush :: TestM ()
+targetConnectionPush = do
+    ca <- view tsCannon
     uid <- randomId
     conn1 <- randomConnId
-    c1 <- connectUser gu ca uid conn1
-    c2 <- connectUser gu ca uid =<< randomConnId
-    sendPush gu (push uid conn1)
+    c1 <- connectUser ca uid conn1
+    c2 <- connectUser ca uid =<< randomConnId
+    sendPush (push uid conn1)
     liftIO $ do
         e1 <- waitForMessage c1
         e2 <- waitForMessage c2
@@ -353,32 +340,33 @@ targetConnectionPush gu ca _ _ = do
     pload = List1.singleton $ HashMap.fromList [ "foo" .= (42 :: Int) ]
     push u t = newPush u (toRecipients [u]) pload & pushConnections .~ Set.singleton t
 
-targetClientPush :: TestSignature ()
-targetClientPush gu ca _ _ = do
+targetClientPush :: TestM ()
+targetClientPush = do
+    ca <- view tsCannon
     uid <- randomId
     cid1 <- randomClientId
    cid2 <- randomClientId
-    let ca1 = Cannon (runCannon ca . queryItem "client" (toByteString' cid1))
-    let ca2 = Cannon (runCannon ca . queryItem "client" (toByteString' cid2))
-    c1 <- connectUser gu ca1 uid =<< randomConnId
-    c2 <- connectUser gu ca2 uid =<< randomConnId
+    let ca1 = CannonR (runCannonR ca . queryItem "client" (toByteString' cid1))
+    let ca2 = CannonR (runCannonR ca . queryItem "client" (toByteString' cid2))
+    c1 <- connectUser ca1 uid =<< randomConnId
+    c2 <- connectUser ca2 uid =<< randomConnId
     -- Push only to the first client
-    sendPush gu (push uid cid1)
+    sendPush (push uid cid1)
     liftIO $ do
         e1 <- waitForMessage c1
         e2 <- waitForMessage c2
         assertBool "No push message received" (isJust e1)
         assertBool "Unexpected push message received" (isNothing e2)
     -- Push only to the second client
-    sendPush gu (push uid cid2)
+    sendPush (push uid cid2)
     liftIO $ do
         e1 <- waitForMessage c1
         e2 <- waitForMessage c2
         assertBool "Unexpected push message received" (isNothing e1)
         assertBool "No push message received" (isJust e2)
     -- Check the notification stream
-    ns1 <- listNotifications uid (Just cid1) gu
-    ns2 <- listNotifications uid (Just cid2) gu
+    ns1 <- listNotifications uid (Just cid1)
+    ns2 <- listNotifications uid (Just cid2)
     liftIO $ forM_ [(ns1, cid1), (ns2, cid2)] $ \(ns, c) -> do
         assertEqual "Not exactly 1 notification" 1 (length ns)
         let p = view queuedNotificationPayload (Prelude.head ns)
@@ -393,30 +381,31 @@ targetClientPush gu ca _ _ = do
 
 -----------------------------------------------------------------------------
 -- Notifications
 
-testNoNotifs :: TestSignature ()
-testNoNotifs gu _ _ _ = do
+testNoNotifs :: TestM ()
+testNoNotifs = do
     ally <- randomId
-    ns <- listNotifications ally Nothing gu
+    ns <- listNotifications ally Nothing
     liftIO $ assertEqual "Unexpected notifications" 0 (length ns)
 
-testFetchAllNotifs :: TestSignature ()
-testFetchAllNotifs gu _ _ _ = do
+testFetchAllNotifs :: TestM ()
+testFetchAllNotifs = do
     ally <- randomId
     let pload = textPayload "hello"
-    replicateM_ 10 (sendPush gu (buildPush ally [(ally, RecipientClientsAll)] pload))
-    ns <- listNotifications ally Nothing gu
+    replicateM_ 10 (sendPush (buildPush ally [(ally, RecipientClientsAll)] pload))
+    ns <- listNotifications ally Nothing
     liftIO $ assertEqual "Unexpected notification count" 10 (length ns)
     liftIO $ assertEqual "Unexpected notification payloads"
         (replicate 10 pload)
         (map (view queuedNotificationPayload) ns)
 
-testFetchNewNotifs :: TestSignature ()
-testFetchNewNotifs gu _ _ _ = do
+testFetchNewNotifs :: TestM ()
+testFetchNewNotifs = do
+    gu <- view tsGundeck
     ally <- randomId
     let pload = textPayload "hello"
-    replicateM_ 4 (sendPush gu (buildPush ally [(ally, RecipientClientsAll)] pload))
-    ns <- map (view queuedNotificationId) <$> listNotifications ally Nothing gu
-    get ( runGundeck gu
+    replicateM_ 4 (sendPush (buildPush ally [(ally, RecipientClientsAll)] pload))
+    ns <- map (view queuedNotificationId) <$> listNotifications ally Nothing
+    get ( runGundeckR gu
         . zUser ally
         . path "notifications"
         . query [("since", Just (toByteString' (ns !! 1)))]
@@ -424,12 +413,13 @@ testFetchNewNotifs gu _ _ _ = do
         const 200 === statusCode
         const (Just $ drop 2 ns) === parseNotificationIds
 
-testNoNewNotifs :: TestSignature ()
-testNoNewNotifs gu _ _ _ = do
+testNoNewNotifs :: TestM ()
+testNoNewNotifs = do
+    gu <- view tsGundeck
     ally <- randomId
-    sendPush gu (buildPush ally [(ally, RecipientClientsAll)] (textPayload "hello"))
-    (n:_) <- map (view queuedNotificationId) <$> listNotifications ally Nothing gu
-    get ( runGundeck gu
+    sendPush (buildPush ally [(ally, RecipientClientsAll)] (textPayload "hello"))
+    (n:_) <- map (view queuedNotificationId) <$> listNotifications ally Nothing
+    get ( runGundeckR gu
         . zUser ally
         . path "notifications"
         . query [("since", Just (toByteString' n))]
@@ -437,64 +427,69 @@ testNoNewNotifs gu _ _ _ = do
         const 200 === statusCode
         const (Just []) === parseNotificationIds
 
-testMissingNotifs :: TestSignature ()
-testMissingNotifs gu _ _ _ = do
+testMissingNotifs :: TestM ()
+testMissingNotifs = do
+    gu <- view tsGundeck
     other <- randomId
-    sendPush gu (buildPush other [(other, RecipientClientsAll)] (textPayload "hello"))
-    (old:_) <- map (view queuedNotificationId) <$> listNotifications other Nothing gu
+    sendPush (buildPush other [(other, RecipientClientsAll)] (textPayload "hello"))
+    (old:_) <- map (view queuedNotificationId) <$> listNotifications other Nothing
     ally <- randomId
-    sendPush gu (buildPush ally [(ally, RecipientClientsAll)] (textPayload "hello"))
-    ns <- listNotifications ally Nothing gu
-    get ( runGundeck gu
+    sendPush (buildPush ally [(ally, RecipientClientsAll)] (textPayload "hello"))
+    ns <- listNotifications ally Nothing
+    get ( runGundeckR gu
         . zUser ally
         . path "notifications"
         . query [("since", Just (toByteString' old))]) !!! do
         const 404 === statusCode
         const (Just ns) === parseNotifications
 
-testFetchLastNotif :: TestSignature ()
-testFetchLastNotif gu _ _ _ = do
+testFetchLastNotif :: TestM ()
+testFetchLastNotif = do
+    gu <- view tsGundeck
     ally <- randomId
-    sendPush gu (buildPush ally [(ally, RecipientClientsAll)] (textPayload "first"))
-    sendPush gu (buildPush ally [(ally, RecipientClientsAll)] (textPayload "last"))
-    [_, n] <- listNotifications ally Nothing gu
-    get (runGundeck gu . zUser ally . paths ["notifications", "last"]) !!! do
+    sendPush (buildPush ally [(ally, RecipientClientsAll)] (textPayload "first"))
+    sendPush (buildPush ally [(ally, RecipientClientsAll)] (textPayload "last"))
+    [_, n] <- listNotifications ally Nothing
+    get (runGundeckR gu . zUser ally . paths ["notifications", "last"]) !!! do
         const 200 === statusCode
         const (Just n) === parseNotification
 
-testNoLastNotif :: TestSignature ()
-testNoLastNotif gu _ _ _ = do
+testNoLastNotif :: TestM ()
+testNoLastNotif = do
+    gu <- view tsGundeck
     ally <- randomId
-    get (runGundeck gu . zUser ally . paths ["notifications", "last"]) !!! do
+    get (runGundeckR gu . zUser ally . paths ["notifications", "last"]) !!! do
         const 404 === statusCode
         const (Just "not-found") =~= responseBody
 
-testFetchNotifBadSince :: TestSignature ()
-testFetchNotifBadSince gu _ _ _ = do
+testFetchNotifBadSince :: TestM ()
+testFetchNotifBadSince = do
+    gu <- view tsGundeck
     ally <- randomId
-    sendPush gu (buildPush ally [(ally, RecipientClientsAll)] (textPayload "first"))
-    ns <- listNotifications ally Nothing gu
-    get ( runGundeck gu
+    sendPush (buildPush ally [(ally, RecipientClientsAll)] (textPayload "first"))
+    ns <- listNotifications ally Nothing
+    get ( runGundeckR gu
         . zUser ally
         . path "notifications"
         . query [("since", Just "jumberjack")]
         ) !!! do
         const 404 === statusCode
        const (Just ns) === parseNotifications
 
-testFetchNotifById :: TestSignature ()
-testFetchNotifById gu _ _ _ = do
+testFetchNotifById :: TestM ()
+testFetchNotifById = do
+    gu <- view tsGundeck
     ally <- randomId
     c1 <- randomClientId
     c2 <- randomClientId
-    sendPush gu (buildPush ally [(ally, RecipientClientsSome (List1.singleton c1))]
+    sendPush (buildPush ally [(ally, RecipientClientsSome (List1.singleton c1))]
         (textPayload "first"))
-    sendPush gu (buildPush ally [(ally, RecipientClientsSome (List1.singleton c2))]
+    sendPush (buildPush ally [(ally, RecipientClientsSome (List1.singleton c2))]
         (textPayload "second"))
-    [n1, n2] <- listNotifications ally Nothing gu
+    [n1, n2] <- listNotifications ally Nothing
     forM_ [(n1, c1), (n2, c2)] $ \(n, c) ->
         let nid = toByteString' (view queuedNotificationId n)
             cid = toByteString' c
-        in get ( runGundeck gu
+        in get ( runGundeckR gu
             . zUser ally
             . paths ["notifications", nid]
             . queryItem "client" cid
@@ -502,77 +497,77 @@ testFetchNotifById gu _ _ _ = do
             const 200 === statusCode
             const (Just n) === parseNotification
 
-testFilterNotifByClient :: TestSignature ()
-testFilterNotifByClient gu _ _ _ = do
+testFilterNotifByClient :: TestM ()
+testFilterNotifByClient = do
     alice <- randomId
     clt1 <- randomClientId
     clt2 <- randomClientId
     clt3 <- randomClientId
 
     -- Add a notification for client 1
-    sendPush gu (buildPush alice [(alice, RecipientClientsSome (List1.singleton clt1))]
+    sendPush (buildPush alice [(alice, RecipientClientsSome (List1.singleton clt1))]
         (textPayload "first"))
-    [n] <- listNotifications alice (Just clt1) gu
+    [n] <- listNotifications alice (Just clt1)
 
     -- get all for the first client
-    getNotifications gu alice (Just clt1) !!! do
+    getNotifications alice (Just clt1) !!! do
         const 200 === statusCode
         const (Just [n]) === parseNotifications
     -- get all for the second client
-    getNotifications gu alice (Just clt2) !!! do
+    getNotifications alice (Just clt2) !!! do
         const 200 === statusCode
         const (Just []) === parseNotifications
     -- get all for all clients
-    getNotifications gu alice Nothing !!! do
+    getNotifications alice Nothing !!! do
         const 200 === statusCode
         const (Just [n]) === parseNotifications
 
     -- Add another notification for client 3
-    sendPush gu (buildPush alice [(alice, RecipientClientsSome (List1.singleton clt3))]
+    sendPush (buildPush alice [(alice, RecipientClientsSome (List1.singleton clt3))]
         (textPayload "last"))
-    [n'] <- listNotifications alice (Just clt3) gu
+    [n'] <- listNotifications alice (Just clt3)
 
     -- get last for the first client
-    getLastNotification gu alice (Just clt1) !!! do
+    getLastNotification alice (Just clt1) !!! do
         const 200 === statusCode
         const (Just n) === parseNotification
     -- get last for a second client
-    getLastNotification gu alice (Just clt2) !!! do
+    getLastNotification alice (Just clt2) !!! do
         const 404 === statusCode
         const (Just "not-found") =~= responseBody
     -- get last for a third client
-    getLastNotification gu alice (Just clt3) !!! do
+    getLastNotification alice (Just clt3) !!! do
         const 200 === statusCode
         const (Just n') === parseNotification
     -- get last for any client
-    getLastNotification gu alice Nothing !!! do
+    getLastNotification alice Nothing !!! do
         const 200 === statusCode
         const (Just n') === parseNotification
 
     -- Add a lot of notifications for client 3
-    replicateM_ 101 $ sendPush gu
+    replicateM_ 101 $ sendPush
         (buildPush alice [(alice, RecipientClientsSome (List1.singleton clt3))]
             (textPayload "final"))
-    ns <- listNotifications alice (Just clt3) gu
+    ns <- listNotifications alice (Just clt3)
     liftIO $ assertBool "notification count" (length ns == 102)
 
     -- last for the first client still unchanged
-    getLastNotification gu alice (Just clt1) !!! do
+    getLastNotification alice (Just clt1) !!! do
         const 200 === statusCode
         const (Just n) === parseNotification
     -- still no notification for the second client
-    getLastNotification gu alice (Just clt2) !!! do
+    getLastNotification alice (Just clt2) !!! do
         const 404 === statusCode
         const (Just "not-found") =~= responseBody
     -- last for the third client updated
-    getLastNotification gu alice (Just clt3) !!! do
+    getLastNotification alice (Just clt3) !!! do
         const 200 === statusCode
         const (Just (last ns)) === parseNotification
 
-testNotificationPaging :: TestSignature ()
-testNotificationPaging gu _ _ _ = do
+testNotificationPaging :: TestM ()
+testNotificationPaging = do
     -- Without client ID
     u1 <- randomId
     replicateM_ 399 (insert u1 RecipientClientsAll)
@@ -606,7 +601,7 @@ testNotificationPaging gu _ _ _ = do
     paging u3 (Just c1) 110 110 [110, 0]
     paging u3 (Just c2) 20 100 [20, 0]
   where
-    insert u c = sendPush gu (buildPush u [(u, c)] (textPayload "data"))
+    insert u c = sendPush (buildPush u [(u, c)] (textPayload "data"))
 
     paging u c total step = foldM_ (next u c (total, step)) (0, Nothing)
@@ -615,12 +610,13 @@ testNotificationPaging gu _ _ _ = do
          -> (Int, Int)
         -> (Int, Maybe NotificationId)
         -> Int
-         -> Http (Int, Maybe NotificationId)
+         -> TestM (Int, Maybe NotificationId)
     next u c (total, step) (count, start) pageSize = do
+        gu <- view tsGundeck
         let range = maybe id (queryItem "client" . toByteString') c
                   . maybe id (queryItem "since" . toByteString') start
                   . queryItem "size" (toByteString' step)
-        r <- get (runGundeck gu . path "/notifications" . zUser u . range)
-    _tokens <- sortPushTokens <$> listPushTokens uid g
+    _tokens <- sortPushTokens <$> listPushTokens uid
     let _expected = sortPushTokens [t11, t12, t13, t21, t22]
     liftIO $ assertEqual "unexpected tokens" _expected _tokens
 
     -- Register overlapping tokens. The previous overlapped
     -- tokens should be removed, but none of the others.
-    _ <- registerPushToken uid t11' g
-    _ <- registerPushToken uid t22' g
+    _ <- registerPushToken uid t11'
+    _ <- registerPushToken uid t22'
 
     -- Check tokens
-    _tokens <- sortPushTokens <$> listPushTokens uid g
+    _tokens <- sortPushTokens <$> listPushTokens uid
     let _expected = sortPushTokens [t11', t12, t13, t21, t22']
     liftIO $ assertEqual "unexpected tokens" _expected _tokens
@@ -686,12 +684,12 @@ testRegisterPushToken g _ b _ = do
     unregisterClient g uid c1 !!! const 200 === statusCode
     -- (deleting a non-existing token is ok.)
     unregisterClient g uid c2 !!! const 200 === statusCode
     unregisterClient g uid c2 !!! const 200 === statusCode
     -- (deleting a non-existing token is ok.)
-    _tokens <- listPushTokens uid g
+    _tokens <- listPushTokens uid
     liftIO $ assertEqual "unexpected tokens" [] _tokens
 
 -- TODO: Try to make this test more performant, this test takes too long right now
-testRegisterTooManyTokens :: TestSignature ()
-testRegisterTooManyTokens g _ _ _ = do
+testRegisterTooManyTokens :: TestM ()
+testRegisterTooManyTokens = do
     -- create tokens for reuse with multiple users
     gcmTok <- Token . T.decodeUtf8 . toByteString' <$> randomId
     uids <- liftIO $ replicateM 55 randomId
@@ -703,57 +701,59 @@ testRegisterTooManyTokens g _ _ _ = do
     registerToken status gcmTok uid = do
         con <- randomClientId
         let tkg = pushToken GCM "test" gcmTok con
-        registerPushTokenRequest uid tkg g !!! const status === statusCode
+        registerPushTokenRequest uid tkg !!! const status === statusCode
 
-testUnregisterPushToken :: TestSignature ()
-testUnregisterPushToken g _ b _ = do
-    uid <- randomUser b
+testUnregisterPushToken :: TestM ()
+testUnregisterPushToken = do
+    uid <- randomUser
     clt <- randomClientId
     tkn <- randomToken clt gcmToken
-    void $ registerPushToken uid tkn g
-    void $ retryWhileN 12 null (listPushTokens uid g)
-    unregisterPushToken uid (tkn^.token) g !!! const 204 === statusCode
-    void $ retryWhileN 12 (not . null) (listPushTokens uid g)
-    unregisterPushToken uid (tkn^.token) g !!! const 404 === statusCode
-
-testPingPong :: TestSignature ()
-testPingPong gu ca _ _ = do
+    void $ registerPushToken uid tkn
+    void $ retryWhileN 12 null (listPushTokens uid)
+    unregisterPushToken uid (tkn^.token) !!! const 204 === statusCode
+    void $ retryWhileN 12 (not . null) (listPushTokens uid)
+    unregisterPushToken uid (tkn^.token) !!! const 404 === statusCode
+
+testPingPong :: TestM ()
+testPingPong = do
+    ca <- view tsCannon
     uid :: UserId <- randomId
     connid :: ConnId <- randomConnId
     [(_, [(chread, chwrite)] :: [(TChan ByteString, TChan ByteString)])]
-        <- connectUsersAndDevicesWithSendingClients gu ca [(uid, [connid])]
+        <- connectUsersAndDevicesWithSendingClients ca [(uid, [connid])]
     liftIO $ do
         atomically $ writeTChan chwrite "ping"
         msg <- waitForMessage chread
         assertBool "no pong" $ msg == Just "pong"
 
-testNoPingNoPong :: TestSignature ()
-testNoPingNoPong gu ca _ _ = do
+testNoPingNoPong :: TestM ()
+testNoPingNoPong = do
+    ca <- view tsCannon
     uid :: UserId <- randomId
     connid :: ConnId <- randomConnId
     [(_, [(chread, chwrite)] :: [(TChan ByteString, TChan ByteString)])]
-        <- connectUsersAndDevicesWithSendingClients gu ca [(uid, [connid])]
+        <- connectUsersAndDevicesWithSendingClients ca [(uid, [connid])]
     liftIO $ do
         atomically $ writeTChan chwrite "Wire is so much nicer with internet!"
         msg <- waitForMessage chread
         assertBool "unexpected response on non-ping" $ isNothing msg
 
-testSharePushToken :: TestSignature ()
-testSharePushToken g _ b _ = do
+testSharePushToken :: TestM ()
+testSharePushToken = do
     gcmTok <- Token . T.decodeUtf8 . toByteString' <$> randomId
     apsTok <- Token . T.decodeUtf8 . B16.encode <$> randomBytes 32
     let tok1 = pushToken GCM "test" gcmTok
     let tok2 = pushToken APNSVoIP "com.wire.dev.ent" apsTok
     let tok3 = pushToken APNS "com.wire.int.ent" apsTok
     forM_ [tok1, tok2, tok3] $ \tk -> do
-        u1 <- randomUser b
-        u2 <- randomUser b
+        u1 <- randomUser
+        u2 <- randomUser
         c1 <- randomClientId
         c2 <- randomClientId
         let t1 = tk c1
         let t2 = tk c2
-        t1' <- registerPushToken u1 t1 g
-        t2' <- registerPushToken u2 t2 g -- share the token with u1
+        t1' <- registerPushToken u1 t1
+        t2' <- registerPushToken u2 t2 -- share the token with u1
         -- ^ Unfortunately this fails locally :(
         -- "Duplicate endpoint token: 61d22005-af6e-4199-add9-899aae79c70a"
         -- Instead of getting something in the lines of
@@ -761,17 +761,17 @@ testSharePushToken g _ b _ = do
         liftIO $ assertEqual "token mismatch" (t1^.token) t1'
         liftIO $ assertEqual "token mismatch" (t2^.token) t2'
         liftIO $ assertEqual "token mismatch" t1' t2'
-        ts1 <- retryWhile ((/= 1) . length) (listPushTokens u1 g)
-        ts2 <- retryWhile ((/= 1) . length) (listPushTokens u2 g)
+        ts1 <- retryWhile ((/= 1) . length) (listPushTokens u1)
+        ts2 <- retryWhile ((/= 1) . length) (listPushTokens u2)
         liftIO $ assertEqual "token mismatch" [t1] ts1
         liftIO $ assertEqual "token mismatch" [t2] ts2
-        unregisterPushToken u1 t1' g !!! const 204 === statusCode
-        unregisterPushToken u2 t2' g !!! const 204 === statusCode
+        unregisterPushToken u1 t1' !!! const 204 === statusCode
+        unregisterPushToken u2 t2' !!! const 204 === statusCode
 
-testReplaceSharedPushToken :: TestSignature ()
-testReplaceSharedPushToken g _ b _ = do
-    u1 <- randomUser b
-    u2 <- randomUser b
+testReplaceSharedPushToken :: TestM ()
+testReplaceSharedPushToken = do
+    u1 <- randomUser
+    u2 <- randomUser
     c1 <- randomClientId
     c2 <- randomClientId
@@ -779,101 +779,104 @@ testReplaceSharedPushToken g _ b _ = do
     t1 <- Token . T.decodeUtf8 . toByteString' <$> randomId
     let pt1 = pushToken GCM "test" t1 c1
     let pt2 = pt1 & tokenClient .~ c2 -- share the token
-    _ <- registerPushToken u1 pt1 g
-    _ <- registerPushToken u2 pt2 g
+    _ <- registerPushToken u1 pt1
+    _ <- registerPushToken u2 pt2
 
     -- Update the shared token
     t2 <- Token . T.decodeUtf8 . toByteString' <$> randomId
     let new = pushToken GCM "test" t2 c1
-    _ <- registerPushToken u1 new g
+    _ <- registerPushToken u1 new
 
     -- Check both tokens
-    ts1 <- map (view token) <$> listPushTokens u1 g
-    ts2 <- map (view token) <$> listPushTokens u2 g
+    ts1 <- map (view token) <$> listPushTokens u1
+    ts2 <- map (view token) <$> listPushTokens u2
     liftIO $ do
         [t2] @=? ts1
         [t2] @=? ts2
 
-testLongPushToken :: TestSignature ()
-testLongPushToken g _ b _ = do
-    uid <- randomUser b
+testLongPushToken :: TestM ()
+testLongPushToken = do
+    uid <- randomUser
     clt <- randomClientId
 
     -- normal size APNS token should succeed
     tkn1 <- randomToken clt apnsToken
-    registerPushTokenRequest uid tkn1 g !!! const 201 === statusCode
+    registerPushTokenRequest uid tkn1 !!! const 201 === statusCode
 
     -- APNS token over 400 bytes should fail (actual token sizes are twice the tSize)
     tkn2 <- randomToken clt apnsToken{tSize=256}
-    registerPushTokenRequest uid tkn2 g !!! const 413 === statusCode
+    registerPushTokenRequest uid tkn2 !!! const 413 === statusCode
 
     -- normal size GCM token should succeed
     tkn3 <- randomToken clt gcmToken
-    registerPushTokenRequest uid tkn3 g !!! const 201 === statusCode
+    registerPushTokenRequest uid tkn3 !!! const 201 === statusCode
 
     -- GCM token over 8192 bytes should fail (actual token sizes are twice the tSize)
     tkn4 <- randomToken clt gcmToken{tSize=5000}
-    registerPushTokenRequest uid tkn4 g !!! const 413 === statusCode
+    registerPushTokenRequest uid tkn4 !!! const 413 === statusCode
 
 -- * Helpers
 
-registerUser :: HasCallStack => Gundeck -> Cannon -> Http (UserId, ConnId)
-registerUser gu ca = do
+registerUser :: HasCallStack => TestM (UserId, ConnId)
+registerUser = do
+    ca <- view tsCannon
     uid <- randomId
     con <- randomConnId
-    void $ connectUser gu ca uid con
-    ensurePresent gu uid 1
+    void $ connectUser ca uid con
+    ensurePresent uid 1
     return (uid, con)
 
-ensurePresent :: HasCallStack => Gundeck -> UserId -> Int -> Http ()
-ensurePresent gu u n =
+ensurePresent :: HasCallStack => UserId -> Int -> TestM ()
+ensurePresent u n = do
+    gu <- view tsGundeck
     retryWhile ((n /=) . length . decodePresence) (getPresence gu (showUser u)) !!!
         (const n === length . decodePresence)
 
-connectUser :: HasCallStack => Gundeck -> Cannon -> UserId -> ConnId -> Http (TChan ByteString)
-connectUser gu ca uid con = do
-    [(_, [ch])] <- connectUsersAndDevices gu ca [(uid, [con])]
+connectUser :: HasCallStack => CannonR -> UserId -> ConnId -> TestM (TChan ByteString)
+connectUser ca uid con = do
+    [(_, [ch])] <- connectUsersAndDevices ca [(uid, [con])]
     return ch
 
 connectUsersAndDevices :: HasCallStack
-    => Gundeck -> Cannon -> [(UserId, [ConnId])]
-    -> Http [(UserId, [TChan ByteString])]
-connectUsersAndDevices gu ca uidsAndConnIds =
-    strip <$> connectUsersAndDevicesWithSendingClients gu ca uidsAndConnIds
+    => CannonR -> [(UserId, [ConnId])]
+    -> TestM [(UserId, [TChan ByteString])]
+connectUsersAndDevices ca uidsAndConnIds = do
+    strip <$> connectUsersAndDevicesWithSendingClients ca uidsAndConnIds
   where
    strip = fmap (_2 %~ fmap fst)
 
 connectUsersAndDevicesWithSendingClients :: HasCallStack
-    => Gundeck -> Cannon -> [(UserId, [ConnId])]
-    -> Http [(UserId, [(TChan ByteString, TChan ByteString)])]
-connectUsersAndDevicesWithSendingClients gu ca uidsAndConnIds = do
+    => CannonR -> [(UserId, [ConnId])]
+    -> TestM [(UserId, [(TChan ByteString, TChan ByteString)])]
+connectUsersAndDevicesWithSendingClients ca uidsAndConnIds = do
     chs <- forM uidsAndConnIds $ \(uid, conns) -> (uid,) <$> do
         forM conns $ \conn -> do
             chread <- liftIO $ atomically newTChan
             chwrite <- liftIO $ atomically newTChan
            _ <- wsRun ca uid conn (wsReaderWriter chread chwrite)
            pure (chread, chwrite)
-    (\(uid, conns) -> wsAssertPresences gu uid (length conns)) `mapM_` uidsAndConnIds
+    (\(uid, conns) -> wsAssertPresences uid (length conns)) `mapM_` uidsAndConnIds
     pure chs
 
 -- | Sort 'PushToken's based on the actual 'token' values.
 sortPushTokens :: [PushToken] -> [PushToken]
 sortPushTokens = sortBy (compare `on` view token)
 
-wsRun :: HasCallStack => Cannon -> UserId -> ConnId -> WS.ClientApp () -> Http (Async ())
+wsRun :: HasCallStack => CannonR -> UserId -> ConnId -> WS.ClientApp () -> TestM (Async ())
 wsRun ca uid (ConnId con) app = do
     liftIO $ async $ WS.runClientWith caHost caPort caPath caOpts caHdrs app
   where
-    runCan = runCannon ca empty
+    runCan = runCannonR ca empty
    caHost = C.unpack $ Http.host runCan
    caPort = Http.port runCan
    caPath = "/await" ++ C.unpack (Http.queryString runCan)
    caOpts = WS.defaultConnectionOptions
    caHdrs = [ ("Z-User", showUser uid), ("Z-Connection", con) ]
 
-wsAssertPresences :: HasCallStack => Gundeck -> UserId -> Int -> Http ()
-wsAssertPresences gu uid numPres = do
+wsAssertPresences :: HasCallStack => UserId -> Int -> TestM ()
+wsAssertPresences uid numPres = do
+    gu <- view tsGundeck
     retryWhile ((numPres /=) . length . decodePresence) (getPresence gu $ showUser uid) !!!
         (const numPres === length . decodePresence)
@@ -899,20 +902,21 @@ waitForMessage = waitForMessage' 1000000
 
 waitForMessage' :: Int -> TChan ByteString -> IO (Maybe ByteString)
 waitForMessage' musecs = System.Timeout.timeout musecs . liftIO . atomically . readTChan
 
-unregisterClient :: Gundeck -> UserId -> ClientId -> Http (Response (Maybe BL.ByteString))
-unregisterClient g uid cid = delete $ runGundeck g
+unregisterClient :: GundeckR -> UserId -> ClientId -> TestM (Response (Maybe BL.ByteString))
+unregisterClient g uid cid = delete $ runGundeckR g
     . zUser uid
     . paths ["/i/clients", toByteString' cid]
 
-registerPushToken :: UserId -> PushToken -> Gundeck -> Http Token
-registerPushToken u t g = do
-    r <- registerPushTokenRequest u t g
+registerPushToken :: UserId -> PushToken -> TestM Token
+registerPushToken u t = do
+    r <- registerPushTokenRequest u t
     return $ Token (T.decodeUtf8 $ getHeader' "Location" r)
 
-registerPushTokenRequest :: UserId -> PushToken -> Gundeck -> Http (Response (Maybe BL.ByteString))
-registerPushTokenRequest u t g = do
+registerPushTokenRequest :: UserId -> PushToken -> TestM (Response (Maybe BL.ByteString))
+registerPushTokenRequest u t = do
+    g <- view tsGundeck
     let p = RequestBodyLBS (encode t)
-    post ( runGundeck g
+    post ( runGundeckR g
         . path "/push/tokens"
         . contentJson
         . zUser u
@@ -920,10 +924,11 @@ registerPushTokenRequest u t g = do
         . body p
         )
 
-unregisterPushToken :: UserId -> Token -> Gundeck -> Http (Response (Maybe BL.ByteString))
-unregisterPushToken u t g = do
+unregisterPushToken :: UserId -> Token -> TestM (Response (Maybe BL.ByteString))
+unregisterPushToken u t = do
+    g <- view tsGundeck
     let p = RequestBodyLBS (encode t)
-    delete ( runGundeck g
+    delete ( runGundeckR g
         . paths ["/push/tokens", toByteString' t]
        . contentJson
        . zUser u
@@ -931,9 +936,10 @@ unregisterPushToken u t g = do
        . body p
       )
 
-listPushTokens :: UserId -> Gundeck -> Http [PushToken]
-listPushTokens u g = do
-    rs <- get ( runGundeck g
+listPushTokens :: UserId -> TestM [PushToken]
+listPushTokens u = do
+    g <- view tsGundeck
+    rs <- get ( runGundeckR g
        . path "/push/tokens"
       . zUser u
      . zConn "random"
@@ -942,33 +948,34 @@ listPushTokens u g = do
        (return . pushTokens)
        (responseBody rs >>= decode)
 
-listNotifications :: HasCallStack => UserId -> Maybe ClientId -> Gundeck -> Http [QueuedNotification]
-listNotifications u c g = do
-    rs <- getNotifications g u c
+listNotifications :: HasCallStack => UserId -> Maybe ClientId -> TestM [QueuedNotification]
+listNotifications u c = do
+    rs <- getNotifications u c
     case responseBody rs >>= decode of
         Nothing -> error "Failed to decode notifications"
         Just ns -> maybe (error "No timestamp on notifications list") -- cf. #47
                          (const $ pure (view queuedNotifications ns))
                         (view queuedTime ns)
 
-getNotifications :: Gundeck -> UserId -> Maybe ClientId -> Http (Response (Maybe BL.ByteString))
-getNotifications gu u c = get $ runGundeck gu
+getNotifications :: UserId -> Maybe ClientId -> TestM (Response (Maybe BL.ByteString))
+getNotifications u c = view tsGundeck >>= \gu -> get $ runGundeckR gu
    . zUser u
   . path "notifications"
  . maybe id (queryItem "client" . toByteString') c
 
-getLastNotification :: Gundeck -> UserId -> Maybe ClientId -> Http (Response (Maybe BL.ByteString))
-getLastNotification gu u c = get $ runGundeck gu
+getLastNotification :: UserId -> Maybe ClientId -> TestM (Response (Maybe BL.ByteString))
+getLastNotification u c = view tsGundeck >>= \gu -> get $ runGundeckR gu
    . zUser u
   . paths ["notifications", "last"]
  . maybe id (queryItem "client" . toByteString') c
 
-sendPush :: HasCallStack => Gundeck -> Push -> Http ()
-sendPush gu push = sendPushes gu [push]
+sendPush :: HasCallStack => Push -> TestM ()
+sendPush push = sendPushes [push]
 
-sendPushes :: HasCallStack => Gundeck -> [Push] -> Http ()
-sendPushes gu push =
-    post ( runGundeck gu . path "i/push/v2" . json push ) !!! const 200 === statusCode
+sendPushes :: HasCallStack => [Push] -> TestM ()
+sendPushes push = do
+    gu <- view tsGundeck
+    post ( runGundeckR gu . path "i/push/v2" . json push ) !!! const 200 === statusCode
 
 buildPush :: HasCallStack
@@ -1001,24 +1008,25 @@ zUser = header "Z-User" . toByteString'
 
 zConn :: ByteString -> Request -> Request
 zConn = header "Z-Connection"
 
-getPresence :: Gundeck -> ByteString -> Http (Response (Maybe BL.ByteString))
-getPresence gu u = get (runGundeck gu . path ("/i/presences/" <> u))
+getPresence :: GundeckR -> ByteString -> TestM (Response (Maybe BL.ByteString))
+getPresence gu u = get (runGundeckR gu . path ("/i/presences/" <> u))
 
-setPresence :: Gundeck -> Presence -> Http (Response (Maybe BL.ByteString))
-setPresence gu dat = post (runGundeck gu . path "/i/presences" . json dat)
+setPresence :: GundeckR -> Presence -> TestM (Response (Maybe BL.ByteString))
+setPresence gu dat = post (runGundeckR gu . path "/i/presences" . json dat)
 
 decodePresence :: Response (Maybe BL.ByteString) -> [Presence]
 decodePresence rs = fromMaybe (error "Failed to decode presences") $
     responseBody rs >>= decode
 
-randomUser :: Brig -> Http UserId
-randomUser br = do
+randomUser :: TestM UserId
+randomUser = do
+    br <- view tsBrig
     e <- liftIO $ mkEmail "success" "simulator.amazonses.com"
     let p = object [ "name"     .= e
                    , "email"    .= e
                    , "password" .= ("secret" :: Text)
                    ]
-    r <- post (runBrig br . path "/i/users" . json p)
+    r <- post (runBrigR br . path "/i/users" . json p)
     return . readNote "unable to parse Location header"
            . C.unpack $ getHeader' "Location" r
@@ -1027,8 +1035,8 @@ randomUser br = do
     uid <- nextRandom
     return $ loc <> "+" <> UUID.toText uid <> "@" <> dom
 
-deleteUser :: HasCallStack => Gundeck -> UserId -> Http ()
-deleteUser g uid = delete (runGundeck g . zUser uid . path "/i/user") !!! const 200 === statusCode
+deleteUser :: HasCallStack => GundeckR -> UserId -> TestM ()
+deleteUser g uid = delete (runGundeckR g . zUser uid . path "/i/user") !!! const 200 === statusCode
 
 toRecipients :: [UserId] -> Range 1 1024 (Set Recipient)
 toRecipients = unsafeRange . Set.fromList . map (`recipient` RouteAny)
diff --git a/services/gundeck/test/integration/Main.hs b/services/gundeck/test/integration/Main.hs
index 4e5ca48c652..47a1da2af59 100644
--- a/services/gundeck/test/integration/Main.hs
+++ b/services/gundeck/test/integration/Main.hs
@@ -17,10 +17,10 @@ import OpenSSL (withOpenSSL)
 import Options.Applicative
 import Test.Tasty
 import Test.Tasty.Options
-import Types
 import Util.Options
 import Util.Options.Common
 import Util.Test
+import TestSetup
 
 import qualified API
 import qualified System.Logger as Logger
@@ -67,6 +67,7 @@ main = withOpenSSL $ runTests go
   where
    go g i = withResource (getOpts g i) releaseOpts $ \opts -> API.tests opts
 
+    getOpts :: FilePath -> FilePath -> IO API.TestSetup
     getOpts gFile iFile = do
        m <- newManager tlsManagerSettings {
            managerResponseTimeout = responseTimeoutMicro 300000000
@@ -74,10 +75,10 @@ main = withOpenSSL $ runTests go
        let local p = Endpoint { _epHost = "127.0.0.1", _epPort = p }
        gConf <- handleParseError =<< decodeFileEither gFile
        iConf <- handleParseError =<< decodeFileEither iFile
-        g  <- Gundeck . mkRequest <$> optOrEnv gundeck iConf (local . read) "GUNDECK_WEB_PORT"
-        c  <- Cannon . mkRequest <$> optOrEnv cannon iConf (local . read) "CANNON_WEB_PORT"
-        c2 <- Cannon . mkRequest <$> optOrEnv cannon2 iConf (local . read) "CANNON2_WEB_PORT"
-        b  <- Brig . mkRequest <$> optOrEnv brig iConf (local . read) "BRIG_WEB_PORT"
+        g  <- GundeckR . mkRequest <$> optOrEnv gundeck iConf (local . read) "GUNDECK_WEB_PORT"
+        c  <- CannonR . mkRequest <$> optOrEnv cannon iConf (local . read) "CANNON_WEB_PORT"
+        c2 <- CannonR . mkRequest <$> optOrEnv cannon2 iConf (local . read) "CANNON2_WEB_PORT"
+        b  <- BrigR . mkRequest <$> optOrEnv brig iConf (local .
read) "BRIG_WEB_PORT" ch <- optOrEnv (\v -> v^.optCassandra.casEndpoint.epHost) gConf pack "GUNDECK_CASSANDRA_HOST" cp <- optOrEnv (\v -> v^.optCassandra.casEndpoint.epPort) gConf read "GUNDECK_CASSANDRA_PORT" ck <- optOrEnv (\v -> v^.optCassandra.casKeyspace) gConf pack "GUNDECK_CASSANDRA_KEYSPACE" @@ -85,7 +86,7 @@ main = withOpenSSL $ runTests go lg <- Logger.new Logger.defSettings db <- defInitCassandra ck ch cp lg - return $ API.TestSetup m g c c2 b db + return $ API.TestSetup m g c c2 b db lg releaseOpts _ = return () diff --git a/services/gundeck/test/integration/TestSetup.hs b/services/gundeck/test/integration/TestSetup.hs new file mode 100644 index 00000000000..1bdfe284577 --- /dev/null +++ b/services/gundeck/test/integration/TestSetup.hs @@ -0,0 +1,67 @@ +{-# LANGUAGE GeneralizedNewtypeDeriving #-} +{-# OPTIONS_GHC -fprint-potential-instances #-} +module TestSetup + ( test + , tsManager + , tsGundeck + , tsCannon + , tsCannon2 + , tsBrig + , tsCass + , tsLogger + , TestM(..) + , TestSetup(..) + , BrigR(..) + , CannonR(..) + , GundeckR(..) 
+ ) where + +import Imports +import Test.Tasty (TestName, TestTree) +import Test.Tasty.HUnit (Assertion, testCase) +import Control.Lens ((^.), makeLenses) +import Control.Monad.Catch (MonadCatch, MonadMask, MonadThrow) +import Bilge (HttpT(..), Manager, MonadHttp, Request, runHttpT) + +import qualified Cassandra as Cql +import qualified System.Logger as Log + +newtype TestM a = + TestM { runTestM :: ReaderT TestSetup (HttpT IO) a + } + deriving ( Functor + , Applicative + , Monad + , MonadReader TestSetup + , MonadIO + , MonadCatch + , MonadThrow + , MonadMask + , MonadHttp + , MonadUnliftIO + ) + +newtype BrigR = BrigR { runBrigR :: Request -> Request } +newtype CannonR = CannonR { runCannonR :: Request -> Request } +newtype GundeckR = GundeckR { runGundeckR :: Request -> Request } + +data TestSetup = TestSetup + { _tsManager :: Manager + , _tsGundeck :: GundeckR + , _tsCannon :: CannonR + , _tsCannon2 :: CannonR + , _tsBrig :: BrigR + , _tsCass :: Cql.ClientState + , _tsLogger :: Log.Logger + } + +makeLenses ''TestSetup + + +test :: IO TestSetup -> TestName -> TestM a -> TestTree +test s n h = testCase n runTest + where + runTest :: Assertion + runTest = do + setup <- s + void . runHttpT (setup ^. tsManager) . flip runReaderT setup . 
runTestM $ h diff --git a/services/gundeck/test/integration/Types.hs b/services/gundeck/test/integration/Types.hs deleted file mode 100644 index ca213f8c7db..00000000000 --- a/services/gundeck/test/integration/Types.hs +++ /dev/null @@ -1,7 +0,0 @@ -module Types where - -import Bilge (Request) - -newtype Brig = Brig { runBrig :: Request -> Request } -newtype Cannon = Cannon { runCannon :: Request -> Request } -newtype Gundeck = Gundeck { runGundeck :: Request -> Request } diff --git a/services/spar/src/Spar/Run.hs b/services/spar/src/Spar/Run.hs index f9c23f07217..c9e52356449 100644 --- a/services/spar/src/Spar/Run.hs +++ b/services/spar/src/Spar/Run.hs @@ -52,7 +52,8 @@ initCassandra opts lgr = do (Cas.initialContactsPlain (Types.cassandra opts ^. casEndpoint . epHost)) (Cas.initialContactsDisco "cassandra_spar") (cs <$> Types.discoUrl opts) - cas <- Cas.init (Log.clone (Just "cassandra.spar") lgr) $ Cas.defSettings + cas <- Cas.init $ Cas.defSettings + & Cas.setLogger (Cas.mkLogger (Log.clone (Just "cassandra.spar") lgr)) & Cas.setContacts (NE.head connectString) (NE.tail connectString) & Cas.setPortNumber (fromIntegral $ Types.cassandra opts ^. casEndpoint . epPort) & Cas.setKeyspace (Keyspace $ Types.cassandra opts ^. casKeyspace) diff --git a/snapshots/wire-1.2.yaml b/snapshots/wire-1.2.yaml new file mode 100644 index 00000000000..5d5a0d04ace --- /dev/null +++ b/snapshots/wire-1.2.yaml @@ -0,0 +1,6 @@ +resolver: wire-1.1.yaml +name: wire-1.2 + +packages: +- cql-io-1.1.0 # the MR in wire-1.0.yaml has been released on hackage. 
+- cql-io-tinylog-0.1.0 diff --git a/stack.yaml b/stack.yaml index ed9d6ba339b..d9cbb2f5e50 100644 --- a/stack.yaml +++ b/stack.yaml @@ -1,4 +1,4 @@ -resolver: snapshots/wire-1.1.yaml +resolver: snapshots/wire-1.2.yaml packages: - libs/api-bot diff --git a/tools/db/auto-whitelist/src/Main.hs b/tools/db/auto-whitelist/src/Main.hs index 818024726a8..eee470c86fb 100644 --- a/tools/db/auto-whitelist/src/Main.hs +++ b/tools/db/auto-whitelist/src/Main.hs @@ -32,7 +32,8 @@ main = do $ Log.defSettings initCas cas l - = C.init l + = C.init + . C.setLogger (C.mkLogger l) . C.setContacts (cas^.cHosts) [] . C.setPortNumber (fromIntegral $ cas^.cPort) . C.setKeyspace (cas^.cKeyspace) diff --git a/tools/db/service-backfill/src/Main.hs b/tools/db/service-backfill/src/Main.hs index fe899b99e8e..746df8bc1fc 100644 --- a/tools/db/service-backfill/src/Main.hs +++ b/tools/db/service-backfill/src/Main.hs @@ -33,7 +33,8 @@ main = do $ Log.defSettings initCas cas l - = C.init l + = C.init + . C.setLogger (C.mkLogger l) . C.setContacts (cas^.cHosts) [] . C.setPortNumber (fromIntegral $ cas^.cPort) . C.setKeyspace (cas^.cKeyspace) From 6752c3fde79aa59c9e0e5d00e3ada48814780e86 Mon Sep 17 00:00:00 2001 From: fisx Date: Thu, 21 Mar 2019 13:21:08 +0100 Subject: [PATCH 18/23] Bump saml2-web-sso dep. 
(#670) --- stack.yaml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/stack.yaml b/stack.yaml index d9cbb2f5e50..ac42a625eb2 100644 --- a/stack.yaml +++ b/stack.yaml @@ -38,7 +38,7 @@ packages: extra-deps: - git: https://github.com/wireapp/saml2-web-sso - commit: c03d17d656ac467350c983d5f844c199e5daceea # master (Feb 21, 2019) + commit: e3aa52ac8637c168c122ad3e0eda02a7759dd56b # master (Mar 20, 2019) - git: https://github.com/wireapp/hscim commit: b2ddde040426d332a2eddcddb00e81ffb1144a90 # master (Mar 13, 2019) - git: https://gitlab.com/fisx/tinylog From 3398958e37594de8f077b0574916a20277821c25 Mon Sep 17 00:00:00 2001 From: Artyom Kazak Date: Fri, 22 Mar 2019 09:16:25 +0200 Subject: [PATCH 19/23] Remove some unused instances (#671) --- libs/bilge/src/Bilge/IO.hs | 2 +- libs/brig-types/src/Brig/Types/TURN.hs | 1 - libs/brig-types/src/Brig/Types/TURN/Internal.hs | 1 - libs/ropes/src/Ropes/Aws.hs | 4 ++-- libs/types-common/src/Data/Id.hs | 7 +------ libs/types-common/src/Util/Options.hs | 2 +- tools/bonanza/src/Bonanza/Streaming/Kibana.hs | 2 +- tools/bonanza/src/Bonanza/Types.hs | 2 +- 8 files changed, 7 insertions(+), 14 deletions(-) diff --git a/libs/bilge/src/Bilge/IO.hs b/libs/bilge/src/Bilge/IO.hs index 10ca0d4c010..67b820e9a55 100644 --- a/libs/bilge/src/Bilge/IO.hs +++ b/libs/bilge/src/Bilge/IO.hs @@ -62,7 +62,7 @@ import qualified Data.ByteString.Lazy as Lazy data Debug = Head -- ^ Print HTTP request/response header. | Full -- ^ Like 'Head' but also print the response body. 
- deriving (Eq, Ord, Show, Read, Enum) + deriving (Eq, Ord, Show, Enum) type Http a = HttpT IO a diff --git a/libs/brig-types/src/Brig/Types/TURN.hs b/libs/brig-types/src/Brig/Types/TURN.hs index 614809f6c06..f1464f5349e 100644 --- a/libs/brig-types/src/Brig/Types/TURN.hs +++ b/libs/brig-types/src/Brig/Types/TURN.hs @@ -1,4 +1,3 @@ -{-# LANGUAGE DeriveGeneric #-} {-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE StrictData #-} {-# LANGUAGE TemplateHaskell #-} diff --git a/libs/brig-types/src/Brig/Types/TURN/Internal.hs b/libs/brig-types/src/Brig/Types/TURN/Internal.hs index 7b652bb544b..3be06456f88 100644 --- a/libs/brig-types/src/Brig/Types/TURN/Internal.hs +++ b/libs/brig-types/src/Brig/Types/TURN/Internal.hs @@ -1,4 +1,3 @@ -{-# LANGUAGE DeriveGeneric #-} {-# LANGUAGE OverloadedStrings #-} module Brig.Types.TURN.Internal where diff --git a/libs/ropes/src/Ropes/Aws.hs b/libs/ropes/src/Ropes/Aws.hs index aa41549a5c9..d55f7c8a234 100644 --- a/libs/ropes/src/Ropes/Aws.hs +++ b/libs/ropes/src/Ropes/Aws.hs @@ -49,7 +49,7 @@ import qualified System.Logger as Logger newtype AccessKeyId = AccessKeyId { unKey :: ByteString } - deriving (Read, Eq, Show) + deriving (Eq, Show) instance FromJSON AccessKeyId where parseJSON = withText "Aws.AccessKeyId" $ @@ -57,7 +57,7 @@ instance FromJSON AccessKeyId where newtype SecretAccessKey = SecretAccessKey { unSecret :: ByteString } - deriving (Read, Eq) + deriving (Eq) instance Show SecretAccessKey where show _ = "AWS Secret hidden" diff --git a/libs/types-common/src/Data/Id.hs b/libs/types-common/src/Data/Id.hs index 97e1908f0e9..97bdd08a4d6 100644 --- a/libs/types-common/src/Data/Id.hs +++ b/libs/types-common/src/Data/Id.hs @@ -1,5 +1,4 @@ {-# LANGUAGE CPP #-} -{-# LANGUAGE DeriveGeneric #-} {-# LANGUAGE GeneralizedNewtypeDeriving #-} {-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE StandaloneDeriving #-} @@ -64,7 +63,7 @@ instance NFData NoId where rnf a = seq a () newtype Id a = Id { toUUID :: UUID - } deriving (Eq, Ord, Generic, 
NFData) + } deriving (Eq, Ord, NFData, Hashable) -- REFACTOR: non-derived, custom show instances break pretty-show and violate the law -- that @show . read == id@. can we derive Show here? @@ -84,8 +83,6 @@ instance FromByteString (Id a) where instance ToByteString (Id a) where builder = byteString . toASCIIBytes . toUUID -instance Hashable (Id a) - randomId :: (Functor m, MonadIO m) => m (Id a) randomId = Id <$> liftIO nextRandom @@ -145,7 +142,6 @@ instance Arbitrary (Id a) where newtype ConnId = ConnId { fromConnId :: ByteString } deriving ( Eq - , Generic , Ord , Read , Show @@ -213,7 +209,6 @@ newtype BotId = BotId { botUserId :: UserId } deriving ( Eq , Ord - , Generic , FromByteString , ToByteString , Hashable diff --git a/libs/types-common/src/Util/Options.hs b/libs/types-common/src/Util/Options.hs index 94ddae0e930..b42ee372ae0 100644 --- a/libs/types-common/src/Util/Options.hs +++ b/libs/types-common/src/Util/Options.hs @@ -72,7 +72,7 @@ deriveFromJSON toOptionFieldName ''CassandraOpts makeLenses ''CassandraOpts newtype FilePathSecrets = FilePathSecrets FilePath - deriving (Eq, Show, Read, FromJSON) + deriving (Eq, Show, FromJSON) loadSecret :: FromJSON a => FilePathSecrets -> IO (Either String a) loadSecret (FilePathSecrets p) = do diff --git a/tools/bonanza/src/Bonanza/Streaming/Kibana.hs b/tools/bonanza/src/Bonanza/Streaming/Kibana.hs index c77a5eca044..ac2b5e3cb61 100644 --- a/tools/bonanza/src/Bonanza/Streaming/Kibana.hs +++ b/tools/bonanza/src/Bonanza/Streaming/Kibana.hs @@ -40,7 +40,7 @@ data BulkAction , _type :: !Text , _id :: !(Maybe Text) } - deriving (Eq, Show, Generic) + deriving (Eq, Show) instance ToJSON BulkAction where toJSON Index{..} = diff --git a/tools/bonanza/src/Bonanza/Types.hs b/tools/bonanza/src/Bonanza/Types.hs index 281b5765039..330955c39fd 100644 --- a/tools/bonanza/src/Bonanza/Types.hs +++ b/tools/bonanza/src/Bonanza/Types.hs @@ -102,7 +102,7 @@ instance Monoid Tags where type TagValue = Aeson.Value newtype Host = Host { host 
:: Text } - deriving (Eq, Show, Generic) + deriving (Eq, Show) instance ToJSON Host where toJSON (Host h) = toJSON h From 0f083e1ca1bc0d515df98a9c2c1ca9b206eeffa9 Mon Sep 17 00:00:00 2001 From: Chris Penner Date: Wed, 20 Mar 2019 17:42:08 +0100 Subject: [PATCH 20/23] Reusable wai middleware for prometheus --- libs/metrics-wai/package.yaml | 7 +--- .../src/Data/Metrics/Middleware/Prometheus.hs | 37 +++++++++++++++++++ libs/metrics-wai/src/Data/Metrics/Types.hs | 4 +- 3 files changed, 42 insertions(+), 6 deletions(-) create mode 100644 libs/metrics-wai/src/Data/Metrics/Middleware/Prometheus.hs diff --git a/libs/metrics-wai/package.yaml b/libs/metrics-wai/package.yaml index a74eea59bad..239c8684c52 100644 --- a/libs/metrics-wai/package.yaml +++ b/libs/metrics-wai/package.yaml @@ -22,12 +22,9 @@ dependencies: - text >=0.11 - transformers >=0.3 - wai >=3 +- wai-middleware-prometheus - wai-route >=0.3 +- wai-routing library: source-dirs: src ghc-prof-options: -auto-all - exposed-modules: - - Data.Metrics.Middleware - - Data.Metrics.Types - - Data.Metrics.WaiRoute - - Data.Metrics.Servant diff --git a/libs/metrics-wai/src/Data/Metrics/Middleware/Prometheus.hs b/libs/metrics-wai/src/Data/Metrics/Middleware/Prometheus.hs new file mode 100644 index 00000000000..85af747620a --- /dev/null +++ b/libs/metrics-wai/src/Data/Metrics/Middleware/Prometheus.hs @@ -0,0 +1,37 @@ +module Data.Metrics.Middleware.Prometheus (waiPrometheusMiddleware) where + +import Imports +import qualified Network.Wai as Wai +import Network.Wai.Routing.Route (Routes, prepare) +import qualified Network.Wai.Middleware.Prometheus as Promth +import qualified Data.Text.Encoding as T + +import Data.Metrics.WaiRoute (treeToPaths) +import Data.Metrics.Types (treeLookup) + +-- | Adds a prometheus metrics endpoint at @/i/metrics@ +-- This middleware requires your server's 'Routes' because it does some normalization +-- (e.g. 
removing params from calls) +waiPrometheusMiddleware :: Monad m => Routes a m b -> Wai.Middleware +waiPrometheusMiddleware routes = + Promth.prometheus conf . Promth.instrumentHandlerValue (normalizeWaiRequestRoute routes) + where + conf = Promth.def + { Promth.prometheusEndPoint = ["i", "metrics"] + -- We provide our own instrumentation so we can normalize routes + , Promth.prometheusInstrumentApp = False + } + +-- | Compute a normalized route for a given request. +-- Normalized routes have route parameters replaced with their identifier +-- e.g. @/user/1234@ might become @/user/userid@ +normalizeWaiRequestRoute :: Monad m => Routes a m b -> Wai.Request -> Text +normalizeWaiRequestRoute routes req = pathInfo + where + mPathInfo :: Maybe ByteString + mPathInfo = treeLookup (treeToPaths $ prepare routes) (T.encodeUtf8 <$> Wai.pathInfo req) + + -- Use the normalized path info if available; otherwise dump the raw path info for + -- debugging purposes + pathInfo :: Text + pathInfo = T.decodeUtf8 $ fromMaybe (Wai.rawPathInfo req) mPathInfo diff --git a/libs/metrics-wai/src/Data/Metrics/Types.hs b/libs/metrics-wai/src/Data/Metrics/Types.hs index 34137a572c3..c03a36cdae0 100644 --- a/libs/metrics-wai/src/Data/Metrics/Types.hs +++ b/libs/metrics-wai/src/Data/Metrics/Types.hs @@ -17,6 +17,7 @@ import Data.Tree as Tree import qualified Data.ByteString.Char8 as BS +-- | The string used to represent the route within metrics e.g. the prometheus label newtype PathTemplate = PathTemplate Text -- | A 'Forest' of path segments. A path segment is 'Left' if it captures a value @@ -49,7 +50,8 @@ mkTree = fmap (Paths . melt) . mapM mkbranch . sortBy (flip compare) . fmap (fma else tree : melt (tree' : trees) -- | A variant of 'Network.Wai.Route.Tree.lookup'. The segments contain values to be captured --- when running the 'App', but here we simply replace them with @"<>"@. +-- when running the 'App', here we simply replace them with their identifier; +-- e.g. 
@/user/1234@ might become @/user/userid@ treeLookup :: Paths -> [ByteString] -> Maybe ByteString treeLookup (Paths forest) = go [] forest where From 97bd6e663228a296953e906e00b09c0a47626e6a Mon Sep 17 00:00:00 2001 From: Chris Penner Date: Wed, 20 Mar 2019 18:15:44 +0100 Subject: [PATCH 21/23] Add prometheus middleware to Galley --- services/galley/package.yaml | 3 +- services/galley/src/Galley/API.hs | 36 ---------------- services/galley/src/Galley/Run.hs | 55 +++++++++++++++++++++++++ services/galley/src/Main.hs | 2 +- services/galley/test/integration/API.hs | 10 +++++ 5 files changed, 67 insertions(+), 39 deletions(-) create mode 100644 services/galley/src/Galley/Run.hs diff --git a/services/galley/package.yaml b/services/galley/package.yaml index e68f139ceee..57c36024406 100644 --- a/services/galley/package.yaml +++ b/services/galley/package.yaml @@ -16,8 +16,7 @@ dependencies: library: source-dirs: src exposed-modules: - - Galley.API - - Galley.App + - Galley.Run - Galley.Options - Galley.Aws - Galley.Data diff --git a/services/galley/src/Galley/API.hs b/services/galley/src/Galley/API.hs index ce6d653d5f5..42339b22348 100644 --- a/services/galley/src/Galley/API.hs +++ b/services/galley/src/Galley/API.hs @@ -1,19 +1,13 @@ module Galley.API where import Imports hiding (head) -import Cassandra (runClient, shutdown) -import Cassandra.Schema (versionCheck) -import Control.Exception (finally) import Control.Lens hiding (enum) import Data.Aeson (encode) import Data.ByteString.Conversion (fromByteString, fromList) import Data.Id (UserId, ConvId) import Data.Metrics.Middleware as Metrics -import Data.Metrics.WaiRoute (treeToPaths) -import Data.Misc import Data.Range import Data.Swagger.Build.Api hiding (def, min, Response) -import Data.Text (unpack) import Data.Text.Encoding (decodeLatin1) import Galley.App import Galley.API.Clients @@ -21,7 +15,6 @@ import Galley.API.Create import Galley.API.Update import Galley.API.Teams import Galley.API.Query -import Galley.Options 
import Galley.Types (OtrFilterMissing (..)) import Galley.Types.Teams (Perm (..)) import Network.HTTP.Types @@ -32,44 +25,15 @@ import Network.Wai.Routing hiding (route) import Network.Wai.Utilities import Network.Wai.Utilities.ZAuth import Network.Wai.Utilities.Swagger -import Network.Wai.Utilities.Server hiding (serverPort) -import Util.Options -import qualified Control.Concurrent.Async as Async import qualified Data.Predicate as P import qualified Data.Set as Set import qualified Galley.API.Error as Error import qualified Galley.API.Internal as Internal -import qualified Galley.Data as Data import qualified Galley.Queue as Q import qualified Galley.Types.Swagger as Model import qualified Galley.Types.Teams.Swagger as TeamsModel import qualified Network.Wai.Predicate as P -import qualified Network.Wai.Middleware.Gzip as GZip -import qualified Network.Wai.Middleware.Gunzip as GZip -import qualified System.Logger as Log - -run :: Opts -> IO () -run o = do - m <- metrics - e <- createEnv m o - let l = e^.applog - s <- newSettings $ defaultServer (unpack $ o^.optGalley.epHost) - (portNumber $ fromIntegral $ o^.optGalley.epPort) - l - m - runClient (e^.cstate) $ - versionCheck Data.schemaVersion - d <- Async.async $ evalGalley e Internal.deleteLoop - let rtree = compile sitemap - measured = measureRequests m (treeToPaths rtree) - app r k = runGalley e r (route rtree r k) - start = measured . catchErrors l m . GZip.gunzip . 
GZip.gzip GZip.def $ app - runSettingsWithShutdown s start 5 `finally` do - Async.cancel d - shutdown (e^.cstate) - Log.flush l - Log.close l sitemap :: Routes ApiBuilder Galley () sitemap = do diff --git a/services/galley/src/Galley/Run.hs b/services/galley/src/Galley/Run.hs new file mode 100644 index 00000000000..8c2cfe79843 --- /dev/null +++ b/services/galley/src/Galley/Run.hs @@ -0,0 +1,55 @@ +module Galley.Run (run) where + +import Imports + +import Cassandra (runClient, shutdown) +import Cassandra.Schema (versionCheck) +import Control.Exception (finally) +import Control.Lens ((^.)) +import Data.Metrics.Middleware.Prometheus (waiPrometheusMiddleware) +import Data.Metrics.WaiRoute (treeToPaths) +import Data.Misc (portNumber) +import Data.Text (unpack) +import Network.Wai (Middleware) +import Network.Wai.Utilities.Server +import Util.Options +import qualified Control.Concurrent.Async as Async +import qualified Data.Metrics.Middleware as M +import qualified Network.Wai.Middleware.Gunzip as GZip +import qualified Network.Wai.Middleware.Gzip as GZip +import qualified System.Logger as Log + +import Galley.API (sitemap) +import qualified Galley.API.Internal as Internal +import qualified Galley.App as App +import Galley.App +import qualified Galley.Data as Data +import Galley.Options (Opts, optGalley) + +run :: Opts -> IO () +run o = do + m <- M.metrics + e <- App.createEnv m o + let l = e ^. App.applog + s <- newSettings $ defaultServer (unpack $ o ^. optGalley.epHost) + (portNumber $ fromIntegral $ o ^. optGalley . epPort) + l + m + runClient (e^.cstate) $ + versionCheck Data.schemaVersion + d <- Async.async $ evalGalley e Internal.deleteLoop + let rtree = compile sitemap + app r k = runGalley e r (route rtree r k) + measured :: Middleware + measured = measureRequests m (treeToPaths rtree) + middlewares :: Middleware + middlewares = waiPrometheusMiddleware sitemap + . measured + . catchErrors l m + . GZip.gunzip + . 
GZip.gzip GZip.def + runSettingsWithShutdown s (middlewares app) 5 `finally` do + Async.cancel d + shutdown (e^.cstate) + Log.flush l + Log.close l diff --git a/services/galley/src/Main.hs b/services/galley/src/Main.hs index 9b96c1fa8de..7e8a06b68ec 100644 --- a/services/galley/src/Main.hs +++ b/services/galley/src/Main.hs @@ -1,7 +1,7 @@ module Main (main) where import Imports -import Galley.API +import Galley.Run (run) import OpenSSL (withOpenSSL) import Util.Options diff --git a/services/galley/test/integration/API.hs b/services/galley/test/integration/API.hs index b5fb64b12e7..ce807c69a1d 100644 --- a/services/galley/test/integration/API.hs +++ b/services/galley/test/integration/API.hs @@ -40,6 +40,7 @@ tests s = testGroup "Galley integration tests" mainTests = testGroup "Main API" [ test s "status" status , test s "monitoring" monitor + , test s "metrics" metrics , test s "create conversation" postConvOk , test s "get empty conversations" getConvsOk , test s "get conversations by ids" getConvsOk2 @@ -108,6 +109,15 @@ monitor = do const 200 === statusCode const (Just "application/json") =~= getHeader "Content-Type" +metrics :: TestM () +metrics = do + g <- view tsGalley + get (g . path "/i/metrics") !!! 
do + const 200 === statusCode + -- Should contain the request duration metric in its output + const (Just "TYPE http_request_duration_seconds histogram") =~= responseBody + + postConvOk :: TestM () postConvOk = do c <- view tsCannon From 6a16a6efc2daf140d63bd0df74ca2bf5f5796f6f Mon Sep 17 00:00:00 2001 From: Matthias Fischmann Date: Mon, 25 Mar 2019 18:48:21 +0100 Subject: [PATCH 22/23] CHANGELOG.md --- CHANGELOG.md | 32 ++++++++++++++++++++++++++++++++ 1 file changed, 32 insertions(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index 079e8bab5c8..12b32da58e0 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,3 +1,35 @@ +# 2019-03-25 + +## API changes + + * SCIM delete user endpoint (#660) + * Require reauthentication when creating a SCIM token (#639) + * Disallow duplicate external ids via SCIM update user (#657) + +## Documentation changes + + * Make an index for the docs/ (#662) + * Docs: using scim with curl. (#659) + * Add spar to the arch diagram. (#650) + +## Bug fixes + + * ADFS-workaround for SAML2 authn response signature validation. (#670) + * Fix: empty objects `{}` are valid TeamMemberDeleteData. (#652) + * Better logo rendering in emails (#649) + +## Internal changes + + * Remove some unused instances (#671) + * Reusable wai middleware for prometheus (for Galley only for now) (#669) + * Bump cql-io dep from merge request to latest release. (#661) + * docker image building for all of the docker images our integration tests require. (#622, #668) + * Checking for 404 is flaky; depends on deletion succeeding (#667) + * Refactor Galley Tests to use Reader Pattern (#666) + * Switch Cargohold to YAML-only config (#653) + * Filter newlines in log output. 
(#642) + + # 2019-02-28 ## API changes From 0c5b6af3f21927efe9e05fed2145346ee25053c8 Mon Sep 17 00:00:00 2001 From: Matthias Fischmann Date: Tue, 26 Mar 2019 09:22:28 +0100 Subject: [PATCH 23/23] Fixup --- CHANGELOG.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 12b32da58e0..4ce9c74b4f4 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,4 +1,4 @@ -# 2019-03-25 +# 2019-03-25 #674 ## API changes @@ -30,7 +30,7 @@ * Filter newlines in log output. (#642) -# 2019-02-28 +# 2019-02-28 #648 ## API changes