FAQ
You can start using the API in Chromium-based browsers (it's already available in Chrome stable). Check the instructions. For non-Chromium browsers like Firefox or Safari, use the polyfill.
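For instance, a page can opt into enforcement with the CSP header and feature-detect the API before creating a policy. This is a minimal sketch - the policy name and the escaping rule are only illustrative:

```js
// Served with a response header such as:
//   Content-Security-Policy: require-trusted-types-for 'script'

// Feature-detect the native API; in browsers without it, load the polyfill
// first so that window.trustedTypes is defined.
if (window.trustedTypes && trustedTypes.createPolicy) {
  const escapePolicy = trustedTypes.createPolicy('escape', {
    // Illustrative rule: HTML-escape the value instead of sanitizing it.
    createHTML: (input) => input.replace(/</g, '&lt;'),
  });
  // The sink now receives a TrustedHTML object instead of a plain string.
  document.body.innerHTML = escapePolicy.createHTML('<img src=x onerror=alert(1)>');
}
```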
Most likely - yes.
The main difference between Trusted Types and Content Security Policy (CSP) is that CSP is an exploit mitigation - it addresses the symptoms of a vulnerability. It does not remove the underlying bug from the code (e.g. injection of unsanitized untrusted data into HTML markup), but rather attempts to prevent its exploitation. Depending on the nature of the specific injection, there can still be security issues (see http://lcamtuf.coredump.cx/postxss/).
Trusted Types, on the other hand, address the root cause. They help developers build applications that are fundamentally free of the underlying injection bugs, to a high degree of confidence.
That said, CSP is a valuable complementary mitigation. For example, Trusted Types cannot address server-side injections (reflected / stored XSS), but CSP targets those as well. Given a web framework that helps with setting up and maintaining policies, it takes very little effort to deploy, and there's really no reason not to use it in addition to Trusted Types.
Note that it is easy to deploy a CSP that ends up being ineffective (see e.g. CSP Is Dead, Long Live CSP whitepaper or script gadgets research). Please follow the most up-to-date recommendations from this presentation if you want to add CSP to your application.
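For example (a sketch only - the exact directive values below are assumptions and should follow the linked recommendations), a server can send a nonce-based CSP together with the Trusted Types directive in a single header. The snippet uses Node's built-in http module:

```js
const http = require('http');
const crypto = require('crypto');

http.createServer((req, res) => {
  // A fresh nonce per response, used in the header and in <script nonce="..."> tags.
  const nonce = crypto.randomBytes(16).toString('base64');
  res.setHeader(
    'Content-Security-Policy',
    // Nonce-based "strict" CSP combined with Trusted Types enforcement.
    `script-src 'nonce-${nonce}' 'strict-dynamic'; object-src 'none'; base-uri 'none'; ` +
      `require-trusted-types-for 'script'`
  );
  res.end(`<!doctype html><script nonce="${nonce}">/* application code */</script>`);
}).listen(8080);
```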
See also #116.
In principle - no; in practice - yes.
Trusted Types aim to lock down the insecure-by-default parts of the DOM API that end up causing DOM XSS bugs in web applications. Additionally, they make it possible to design applications in a way that isolates the security-relevant code into orders-of-magnitude smaller, reviewable and controllable fragments. While it is possible that those (user-defined) functions are insecure and introduce DOM XSS, the task of preventing, detecting and fixing them becomes manageable, even for very large applications. And this is what may, in practice, prevent DOM XSS.
Trusted in this context signifies that the application author is confident that a given value can be safely used with an injection sink - she trusts it does not introduce a vulnerability. That does not imply that the value is indeed safe - that property might be provided by a sanitizer or a validator (which may be used internally in Trusted Types policies; in fact, that's very much the recommended approach).
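For example, a policy might delegate to a client-side sanitizer such as DOMPurify. This is a sketch that assumes DOMPurify is loaded on the page; the policy name is illustrative:

```js
// The returned TrustedHTML is "trusted" only because the author trusts
// this sanitization step - the safety comes from DOMPurify, not from the type.
const sanitizing = trustedTypes.createPolicy('sanitizing', {
  createHTML: (dirty) => DOMPurify.sanitize(dirty),
});

document.body.innerHTML = sanitizing.createHTML('<b>hi</b><img src=x onerror=alert(1)>');
```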
It's commonly thought that DOM-based XSS can be comprehensively addressed by providing a built-in sanitizer for risky values (e.g. HTML snippets). The reasoning is - if all values are sanitized, XSS can't happen.
The requirement for such a built-in sanitizer is that all the sinks need to call it. If the browser does not enforce that, the most common source of security bugs - omitting the security check altogether - remains unaddressed.
However, even when a browser enforces sanitization, we're still left with a problem: web applications legitimately use patterns that would be blocked by a naive sanitizer. For example, many applications load scripts dynamically, and some use inline event handlers - or eval(). Some applications want to sanitize data not only for DOM XSS prevention, but also to e.g. prevent DOM Clobbering. So every sanitizer needs to be configured for a given web application anyway, as there needs to be an allowed way of e.g. doing dynamic script loading - and such configuration hooks must exist. Judging from existing sanitizers, the configuration options tend to grow.
Additionally, if the sanitizer is always called by the browser, it has to be one monolithic sanitizer. The complexity of such a sanitizer's rules tends to grow linearly with the application. Moreover, the sanitizer's behavior needs to stay consistent over time, as developers expect Web APIs to be stable. It follows that it becomes tricky to even fix some bypass bugs in the built-in sanitizer, as the code change may cause existing applications to break.
Trusted Types aim to address the problem from a different angle. Instead of focusing on neutralizing string values by pushing them through a centralized sanitizer, they allow the risky APIs (like the DOM XSS sinks) to be locked down so that they only accept certain objects. The security then comes from controlling how those objects are created.
This has several advantages:
- there can be multiple sources (policies) of allowed objects, making securing modular applications easier, and allowing the security rules to be small and isolated from the rest of the application (see the sketch after this list).
- the authors are in control of the rules. That lets them review and lock down the rules, and develop and fix bugs in them together with the application.
- it provides good, type-based primitives to build future Web APIs and user libraries on top of. For example, if we notice that developers struggle with writing secure rules, we may implement a Web API that provides them - and returns Trusted Type instances, or policies.
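A sketch of how this can look in practice (the policy names, the escaping rule and the allowed script origin are all made-up assumptions): separate modules register their own small policies, and under enforcement the sinks reject plain strings, so only values produced by these reviewed rules reach them.

```js
// Enforcement is assumed to be on:
//   Content-Security-Policy: require-trusted-types-for 'script'

// A tiny policy owned by the templating module.
const htmlPolicy = trustedTypes.createPolicy('templates', {
  createHTML: (s) => s.replace(/</g, '&lt;'), // escape-only rule for plain text
});

// A separate policy owned by the script-loader module.
const loaderPolicy = trustedTypes.createPolicy('script-loader', {
  createScriptURL: (url) => {
    if (new URL(url, location.href).origin === 'https://scripts.example.com') {
      return url;
    }
    throw new TypeError('Untrusted script origin: ' + url);
  },
});

const userInput = '<img src=x onerror=alert(1)>';
document.body.innerHTML = htmlPolicy.createHTML(userInput); // OK: a TrustedHTML object

try {
  document.body.innerHTML = userInput; // a plain string no longer reaches the sink
} catch (e) {
  console.warn('Blocked by Trusted Types:', e.message);
}

const el = document.createElement('script');
el.src = loaderPolicy.createScriptURL('https://scripts.example.com/app.js');
document.head.appendChild(el);
```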
As a side note - Trusted Types can use a sanitizer, even a built-in one. For example:
```js
// Content-Security-Policy: require-trusted-types-for 'script'; trusted-types default;
trustedTypes.createPolicy('default', {
  createHTML: navigator.sanitizeHTML,
});
```
does exactly that.