# Cache
To provide results from cache where available.
The Polly `CachePolicy` is an implementation of a read-through cache, also known as the cache-aside pattern. Providing results from cache where possible reduces overall call duration and can reduce network traffic.

Retrieving a result from an in-memory cache can eliminate a downstream call entirely. A distributed cache can be used to provide a shared cache across upstream nodes; to retrieve values from a network resource nearer-by than the underlying called system; or where caching requirements exceed in-memory storage.
Polly `CachePolicy` operates in conjunction with an `ISyncCacheProvider` or `IAsyncCacheProvider` implementation. The following implementations are available via separate NuGet packages:
| Package | Description | Supported targets |
|---|---|---|
| Polly.Caching.Memory (NuGet for Polly >=6.0.1; NuGet for Polly <=5.9.0; GitHub and doco) | An in-memory cache implementation using the standard .NET Framework / .NET Core `MemoryCache` providers. | .NET 4.0; .NET 4.5; .NET Standard 1.1 (supports .NET Core and Xamarin); .NET Standard 2.0 |
| Polly.Caching.Distributed (NuGet; GitHub and doco) | Supports any implementation of .NET Core's `IDistributedCache`, including the Redis implementation and SQL-Server-based implementations that Microsoft provides. | .NET Standard 1.1 (supports .NET Core and Xamarin); .NET Standard 2.0 |
Use the below compatibility grid to select the correct version of cache providers to use with your version of Polly:

| Version of Polly | Version of the above cache providers to use |
|---|---|
| up to v6 | v2 |
| v7 onwards | v3 onwards |
New cache providers can also be implemented against the easy-to-fulfil `ISyncCacheProvider` and `IAsyncCacheProvider` interfaces.
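As an illustration, a minimal custom provider might look like the following sketch. It assumes the Polly v7 `ISyncCacheProvider` shape (`TryGet` returning a tuple, `Put` taking a `Ttl`); the `SimpleDictionaryCacheProvider` name and the naive expiry handling are this example's own, not part of any published package.

```csharp
using System;
using System.Collections.Concurrent;
using Polly.Caching;

// Hypothetical example provider: a thread-safe dictionary with per-item absolute expiry.
// Not production-grade: expired items are only evicted lazily, on read.
public class SimpleDictionaryCacheProvider : ISyncCacheProvider
{
    private readonly ConcurrentDictionary<string, (object value, DateTimeOffset expiry)> _store
        = new ConcurrentDictionary<string, (object, DateTimeOffset)>();

    public (bool, object) TryGet(string key)
    {
        if (_store.TryGetValue(key, out var entry))
        {
            if (entry.expiry > DateTimeOffset.UtcNow) return (true, entry.value);
            _store.TryRemove(key, out _); // lazily evict an expired item
        }
        return (false, null);
    }

    public void Put(string key, object value, Ttl ttl)
    {
        // This sketch ignores ttl.SlidingExpiration and treats every ttl as relative.
        _store[key] = (value, DateTimeOffset.UtcNow.Add(ttl.Timespan));
    }
}
```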
- The cache key to use is determined according to the supplied (or default) cache key strategy.
- Where the cache holds a value under the corresponding key:
  - the delegate passed to `.Execute(...)` or similar is not called; the value from cache is returned instead.
- Where the cache does not hold a result under the corresponding key:
  - the delegate passed to `.Execute(...)` or similar is called as usual;
  - the retrieved value is put in the cache, using the configured time-to-live strategy;
  - the retrieved value is returned.
```csharp
CachePolicy cache = Policy
  .Cache(ISyncCacheProvider cacheProvider
       , TimeSpan ttl | ITtlStrategy ttlStrategy
       [, ICacheKeyStrategy cacheKeyStrategy | Func<Context, string> cacheKeyStrategy]
       [, Action<Context, string, Exception> onCacheError]
       |
       [, Action<Context, string> onCacheGet
        , Action<Context, string> onCacheMiss
        , Action<Context, string> onCachePut
        , Action<Context, string, Exception> onCacheGetError
        , Action<Context, string, Exception> onCachePutError]
  );
```

```csharp
CachePolicy cache = Policy
  .CacheAsync(IAsyncCacheProvider cacheProvider
       , TimeSpan ttl | ITtlStrategy ttlStrategy
       [, ICacheKeyStrategy cacheKeyStrategy | Func<Context, string> cacheKeyStrategy]
       [, Action<Context, string, Exception> onCacheError]
       |
       [, Action<Context, string> onCacheGet
        , Action<Context, string> onCacheMiss
        , Action<Context, string> onCachePut
        , Action<Context, string, Exception> onCacheGetError
        , Action<Context, string, Exception> onCachePutError]
  );
```

```csharp
CachePolicy<TResult> cache = Policy
  .Cache<TResult>(ISyncCacheProvider<TResult> cacheProvider, /* etc */);

CachePolicy<TResult> cache = Policy
  .CacheAsync<TResult>(IAsyncCacheProvider<TResult> cacheProvider, /* etc */);
```
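For example, a concrete synchronous configuration using the `MemoryCacheProvider` from the Polly.Caching.Memory package might look like the following sketch (it assumes the Polly.Caching.Memory and Microsoft.Extensions.Caching.Memory packages are referenced):

```csharp
using System;
using Microsoft.Extensions.Caching.Memory;
using Polly;
using Polly.Caching;
using Polly.Caching.Memory;

// Wrap the standard .NET Core MemoryCache in Polly's cache provider.
MemoryCacheProvider memoryCacheProvider =
    new MemoryCacheProvider(new MemoryCache(new MemoryCacheOptions()));

// Cache results for five minutes from the moment each item is put in the cache.
CachePolicy cachePolicy = Policy.Cache(memoryCacheProvider, TimeSpan.FromMinutes(5));
```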
Like any other policy, a `CachePolicy` may be re-used across multiple call sites. Specify the key to be used for caching for a particular execution by passing a `Context` instance in to the execution: the `OperationKey` (prior to v6: `ExecutionKey`) on the `Context` instance is the key that will be used for caching.
```csharp
TResult result = await cachePolicy.ExecuteAsync(context => getFoo(), new Context("FooKey"));
```
The above pattern assumes you are using the default `CacheKeyStrategy`. You can also specify a custom cache key strategy when configuring the policy, to vary this behaviour.
`cacheProvider`: The underlying cache provider to use. `CachePolicy` must be used in conjunction with an `ISyncCacheProvider` or `IAsyncCacheProvider` implementation: existing providers are available via NuGet (see above), or you may implement your own.

Serializers (see below) can also be used with cache providers, to serialize execution `TResult` types to the `TCache` types required by cache providers. The same `cacheProvider` and serializer instances may be used across multiple call sites.
`TimeSpan ttl`: Time-to-live (ttl) for the cache item, as a relative, non-sliding duration from the moment the item is put in the cache. For example, if `TimeSpan.FromMinutes(5)` is passed, the `cacheProvider` should consider the item valid for 5 minutes.

`ITtlStrategy ttlStrategy`: offers ttl strategies beyond the simple `TimeSpan ttl` above:

- `RelativeTtl(TimeSpan ttl)`: equivalent to `ttl` above.
- `AbsoluteTtl(DateTimeOffset absoluteExpirationTime)`: indicates that the `cacheProvider` should make the cached item expire at the absolute time given.
- `SlidingTtl(TimeSpan slidingTtl)`: indicates that the `cacheProvider` should treat the cached item as having a sliding ttl of the specified timespan. For instance, if `TimeSpan.FromMinutes(5)` is passed, the `cacheProvider` should consider the item valid for a further 5 minutes each time the cache item is touched.
- `ContextualTtl`: specifies that the execution should take the ttl from a property on the `Context` passed to execution, `context[ContextualTtl.TimeSpanKey]`. This allows you to define a central cache policy that will use varying ttls at different call sites, by placing the desired ttl on Polly's execution context. For example:
```csharp
context[ContextualTtl.TimeSpanKey] = TimeSpan.FromMinutes(5);
context[ContextualTtl.SlidingExpirationKey] = true; // if desired; if not set, false is assumed
```
- `ResultTtl`: specifies a function that will be used to calculate the ttl based on the `TResult` item to be cached. This is useful in any scenario where the result itself indicates (perhaps via a header) how long it should be cached for. An example is where a call obtains an authorisation token, and the call result also tells you how long that token is valid.
  - `ResultTtl(Func<TResult, Ttl>)`: specifies a function to calculate the `Ttl` based on the `TResult` item being cached.
  - `ResultTtl(Func<Context, TResult, Ttl>)`: specifies a function to calculate the `Ttl` based on the `TResult` item being cached and the execution `Context`.
If no `cacheKeyStrategy` is specified, the cache key to use is taken as the `OperationKey` property on the execution `Context`, i.e. `context.OperationKey`. For example:
```csharp
TResult result = await cache.ExecuteAsync(async context => await getFooAsync(), new Context("FooKey")); // "FooKey" is the cache key to use in this execution.
```
If `context.OperationKey` is not specified (no `Context` is passed to the execution, or `context.OperationKey` is not set), caching behaviour is skipped, and the underlying delegate passed to `.Execute(...)` (or similar) is called.

Prior to v6, `OperationKey` was named `ExecutionKey`.
`Func<Context, string> cacheKeyStrategy`: allows the specification of a custom strategy for deriving a more specific cache key for the execution. For instance, to cache items obtained through the execution by some guid:

```csharp
// configuration
CachePolicy cache = Policy.CacheAsync(cacheProvider, TimeSpan.FromMinutes(5), context => context.OperationKey + context["guid"]);

// usage, elsewhere
Guid guid = ... // from somewhere
Context policyExecutionContext = new Context("GetResource-");
policyExecutionContext["guid"] = guid.ToString();
TResult result = await cache.ExecuteAsync(async context => await getResourceAsync(guid), policyExecutionContext); // "GetResource-SomeGuid" is the key used in this execution, if guid == SomeGuid.
```
`ICacheKeyStrategy cacheKeyStrategy`: is available as a parameter in some overloads, for more complex key strategies.
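An implementation of `ICacheKeyStrategy` might look like the following sketch. It assumes the interface exposes a single `GetCacheKey(Context)` method; the `NamespacedCacheKeyStrategy` class and its prefixing scheme are hypothetical, purely for illustration.

```csharp
using Polly;
using Polly.Caching;

// Hypothetical example strategy: prefix every key with an application-defined
// namespace, so that multiple applications can share one distributed cache safely.
public class NamespacedCacheKeyStrategy : ICacheKeyStrategy
{
    private readonly string _prefix;

    public NamespacedCacheKeyStrategy(string prefix) => _prefix = prefix;

    // Combine the namespace prefix with the per-execution OperationKey.
    public string GetCacheKey(Context context) => _prefix + ":" + context.OperationKey;
}
```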
Some cache providers (such as Redis) store items as specific types (eg `string` or `byte[]`), requiring you to serialize more complex types to those.
The following serializers are available for use with Polly caching policies:

| Package | Description |
|---|---|
| Polly.Caching.Serialization.Json (NuGet; GitHub and doco) | A Newtonsoft.Json-based serializer for serializing any type to a JSON `string`. |
See here for notes on using serializers with the Polly `CachePolicy`. New serializers are also easy to implement.
An optional `onCacheGet` delegate allows specific code to be executed (for example for logging) when a value is retrieved from cache.

An optional `onCacheMiss` delegate allows specific code to be executed (for example for logging) when a cache miss occurs (a value is not found in the cache for the given key).

An optional `onCachePut` delegate allows specific code to be executed (for example for logging) after a value has been put to the cache.

An optional `onCacheError` delegate allows specific code to be executed (for example for logging) if any call to the underlying `cacheProvider` throws an exception. If the `onCacheError` delegate is configured, it is used for both `onCacheGetError` and `onCachePutError`.

The alternative, optional `onCacheGetError` delegate is a more specific version of `onCacheError`, executed only if get calls to the underlying `cacheProvider` throw an exception.

The alternative, optional `onCachePutError` delegate is a more specific version of `onCacheError`, executed only if put calls to the underlying `cacheProvider` throw an exception.

All the delegates above take as input parameters the execution `Context` and the `string` cache key. Error-capturing delegates also take the `Exception` thrown by the cache provider.
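For instance, delegates wiring cache events through to a logger might look like the following sketch (the `logger` variable and its methods are assumptions standing in for whatever logging framework you use; `cacheProvider` is an `ISyncCacheProvider` configured elsewhere):

```csharp
using System;
using Polly;
using Polly.Caching;

Action<Context, string> onCacheGet =
    (context, key) => logger.LogDebug($"Cache hit for key {key}.");
Action<Context, string> onCacheMiss =
    (context, key) => logger.LogDebug($"Cache miss for key {key}.");
Action<Context, string> onCachePut =
    (context, key) => logger.LogDebug($"Placed item in cache under key {key}.");
Action<Context, string, Exception> onCacheGetError =
    (context, key, exception) => logger.LogWarning(exception, $"Cache get failed for key {key}.");
Action<Context, string, Exception> onCachePutError =
    (context, key, exception) => logger.LogWarning(exception, $"Cache put failed for key {key}.");

// Uses the overload taking the five event delegates, per the syntax shown above.
CachePolicy cachePolicy = Policy.Cache(cacheProvider, TimeSpan.FromMinutes(5),
    onCacheGet, onCacheMiss, onCachePut, onCacheGetError, onCachePutError);
```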
No exceptions due to caching operations are thrown. If the underlying `cacheProvider` throws an exception during a cache operation:

- the exception is passed to the relevant `onCacheError`, `onCacheGetError` or `onCachePutError` delegate, if configured;
- the execution continues. For example, if the underlying `cacheProvider` throws while checking whether the cache contains a value for the given key, the execution treats this as a cache miss and calls the delegate passed to `.Execute(...)`.

In other words, the execution intentionally swallows the exception, after having passed it to `onCacheError`, `onCacheGetError` or `onCachePutError` for (say) logging. This is so that caching in itself can never bring the application down.
See guidance on ordering the available policy types in a wrap. `CachePolicy` should usually be placed outermost in a `PolicyWrap`, with only `FallbackPolicy` outside it.
If an execution returning `void` is placed through a `CachePolicy`, the caching operation is silently bypassed (there is no result to cache) rather than an exception being thrown. This allows a `CachePolicy` to be included in a `PolicyWrap` which is sometimes used for `TResult`-returning executions and sometimes for `void`-returning ones, without exceptions being thrown.
There can be occasions where you want to cache only certain responses of the execution and not others. A typical case is when a `CachePolicy` governs executions returning `HttpResponseMessage` and you want to cache only when `HttpResponseMessage.StatusCode == HttpStatusCode.OK`.

This can be achieved by using a `ResultTtl` strategy which returns a `Ttl` of `TimeSpan.Zero` as the ttl for results you do not wish to cache:
```csharp
Func<Context, HttpResponseMessage, Ttl> cacheOnly200OKfilter =
    (context, result) => new Ttl(
        timeSpan: result.StatusCode == HttpStatusCode.OK ? TimeSpan.FromMinutes(5) : TimeSpan.Zero,
        slidingExpiration: true
    );

IAsyncPolicy<HttpResponseMessage> cacheOnly200OKpolicy =
    Policy.CacheAsync<HttpResponseMessage>(
        cacheProvider: /* the cache provider you are using */,
        ttlStrategy: new ResultTtl<HttpResponseMessage>(cacheOnly200OKfilter),
        onCacheError: /* whatever cache error logging */
    ); // (or another, richer CacheAsync overload taking an ITtlStrategy ttlStrategy)
```
When the `ITtlStrategy` returns `TimeSpan.Zero`, the policy skips putting that item to the cache.

Note, however, some cautions about caching at the `HttpResponseMessage` level: see the discussion Is caching at the `HttpResponseMessage` level the right fit?.
At Polly v6, cache policies do not cache result values of `default(TResult)`, by analogy with an original convention from .NET Framework that a cache returning `null` means no value was found for that key in the cache. An undesirable side effect is that this prevents caching `default(TResult)` for value types (for which `default(TResult) != null`). At Polly v6, to ensure that you can cache `default(TResult)` for a value type, change the execution type to `TResult?`.

From Polly v7.0.0 (with cache providers >=v3.0.0), cache policies permit caching `null` and `default(TResult)` for all value and reference types.
The internal operation of `CachePolicy` is thread-safe: multiple calls may safely be placed concurrently through a policy instance (assuming the configured `cacheProvider` implementation is also thread-safe).

`CachePolicy` instances may be re-used across multiple call sites. `cacheProvider` instances may be re-used across multiple `CachePolicy` instances and call sites. Serializer instances may likewise be re-used across multiple `CachePolicy` instances and call sites.

When re-using policies, use a differing `OperationKey` per usage to specify the cache key (if `DefaultCacheKeyStrategy` is used), and to distinguish different call-site usages within logging and metrics.