
During Data Push, sending system should authenticate receiving system as the intended receiver #276

Open
reinkrul opened this issue Jun 10, 2024 · 3 comments

@reinkrul (Member)

As a Data Pusher, I want to validate that the (FHIR) endpoint I push data to is owned by the intended receiving organization, so that I can be sure I don't leak data. When pushing data as-is (e.g. an HTTP POST to an arbitrary endpoint), the endpoint isn't authenticated as being owned by the intended receiver (care organization) of the data. This can lead to a data leak in at least the following circumstances:

  • Data receiver's sysadmin (by accident or malicious intent):
    • misconfiguring FHIR endpoint (zorgbijjou.eu/fhir) in the DID document
    • misconfiguring DNS
    • misconfiguring reverse proxy (etc…)
  • Data receiver losing control over its domain through a hostile domain takeover, e.g. by:
    • forgetting to renew it,
    • abandoning it after an organizational merger/reorganization,
    • an attacker changing DNS records
  • Attacker spoofing DNS at the data sender

Note: this is in contrast to data flows using Notified Pull, where the data receiver is authenticated through OAuth2.

There are multiple possible mitigation measures:

  • well-known DID configuration, so you can link the domain to a DID, which you compare with the one(s) you resolved through the organization's URA. By itself this doesn't protect against domain take-over, but the Verifiable Presentation of the Domain Linkage Credential can be made reasonably short-lived to protect against that (the domain retention period is typically 30 days?)
  • add TLS server certificate to DID document (ugly, non-standard, requires keeping certs in sync).
  • validating that the TLS server certificate subject matches a pre-known attribute (e.g. organization name, URA, UZI, KvK-nummer, etc.); a sketch follows this list
    • Fiddly: names might not match, depending on the authoritative registry
    • Requires private PKI systems, so it doesn't scale well
  • encrypting content with a public key of the receiver, so only the intended receiver can read the data
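To make the certificate-subject option concrete, here is a minimal Go sketch using the standard `crypto/tls` library. The `expectedOrg` value and matching on the subject's `Organization` field are illustrative assumptions; in practice the pre-known attribute could just as well be a URA or KvK-nummer carried in another subject field.

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
)

// newCheckedClient returns an HTTP client that, in addition to standard
// certificate chain validation, verifies that the server's leaf certificate
// names the expected organization in its subject.
func newCheckedClient(expectedOrg string) *http.Client {
	tlsCfg := &tls.Config{
		// VerifyPeerCertificate runs after the regular verification
		// (chain building against the trusted roots) has succeeded.
		VerifyPeerCertificate: func(rawCerts [][]byte, verifiedChains [][]*x509.Certificate) error {
			if len(verifiedChains) == 0 || len(verifiedChains[0]) == 0 {
				return fmt.Errorf("no verified certificate chain")
			}
			leaf := verifiedChains[0][0]
			for _, org := range leaf.Subject.Organization {
				if org == expectedOrg {
					return nil
				}
			}
			return fmt.Errorf("certificate subject %v does not name expected organization %q",
				leaf.Subject.Organization, expectedOrg)
		},
	}
	return &http.Client{Transport: &http.Transport{TLSClientConfig: tlsCfg}}
}
```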
@woutslakhorst (Member)

This is not limited to just POST. Any server-to-server communication without a standard OAuth2 authorization code flow is affected. A GET is less dangerous, since the attacker would need to know the exact return value (e.g. the FHIR resource) to do harm without raising an error. A POST in any reasonably human-readable syntax could simply be read.

@woutslakhorst (Member)

Without going overboard on custom security measures, implement process security measures:

  • 🟢 any changes to DID documents (or, more generally, any use of a private key for this purpose) should be audited (4 eyes)
  • 🟢 other misconfigurations would be caught at the other end (authn failure), so errors should be monitored (403s)
  • 🟢 domains must have a 1-year retention period; calls to them must be monitored
  • 🟢 DNS over HTTPS
  • 🔴 DNS takeover… would require public-key/certificate pinning to mitigate? (see the pinning sketch below)
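As a rough illustration of the 🔴 item, a Go sketch of sender-side public-key pinning; the `pins` set (base64-encoded SHA-256 hashes of the receiver's SubjectPublicKeyInfo) and how it would be distributed are assumptions, not an existing mechanism:

```go
package main

import (
	"crypto/sha256"
	"crypto/tls"
	"crypto/x509"
	"encoding/base64"
	"errors"
)

// pinnedTLSConfig rejects any TLS connection whose verified chain does not
// contain a certificate with a pinned SubjectPublicKeyInfo hash, so a DNS
// takeover pointing the domain at another server is detected.
func pinnedTLSConfig(pins map[string]bool) *tls.Config {
	return &tls.Config{
		VerifyPeerCertificate: func(rawCerts [][]byte, verifiedChains [][]*x509.Certificate) error {
			for _, chain := range verifiedChains {
				for _, cert := range chain {
					sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
					if pins[base64.StdEncoding.EncodeToString(sum[:])] {
						return nil // found a pinned key in the chain
					}
				}
			}
			return errors.New("no pinned public key found in server certificate chain")
		},
	}
}
```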

@reinkrul (Member, Author)

Content encryption is always an option for very high-risk POSTs. But I'd say that if there's limited PII, the above measures might be enough.
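For reference, a minimal sketch of such content encryption using NaCl box (`golang.org/x/crypto/nacl/box`); this is one possible construction, not something prescribed here, and it assumes the receiver's X25519 public key is published somewhere trustworthy, e.g. in its DID document:

```go
package main

import (
	"crypto/rand"
	"errors"

	"golang.org/x/crypto/nacl/box"
)

// encryptForReceiver seals the payload against the receiver's public key, so
// only the holder of the matching private key can read it, even if the data
// ends up at the wrong endpoint. SealAnonymous uses an ephemeral sender key
// pair, so it provides confidentiality but no sender authentication.
func encryptForReceiver(payload []byte, receiverPub *[32]byte) ([]byte, error) {
	return box.SealAnonymous(nil, payload, receiverPub, rand.Reader)
}

// decryptAtReceiver is the receiving side's counterpart.
func decryptAtReceiver(ciphertext []byte, receiverPub, receiverPriv *[32]byte) ([]byte, error) {
	plaintext, ok := box.OpenAnonymous(nil, ciphertext, receiverPub, receiverPriv)
	if !ok {
		return nil, errors.New("payload was not encrypted for this receiver")
	}
	return plaintext, nil
}
```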
