Commit

Add feature to handle stale return only after waiting an amount of time for the onMiss handler to resolve (#3)
Ben Clark authored Jan 21, 2020
1 parent e8dffb5 commit 38f2c2f
Showing 6 changed files with 182 additions and 22 deletions.
28 changes: 15 additions & 13 deletions README.md
@@ -4,7 +4,7 @@ Caching library supporting locked updates and stale return to handle [cache stam

The locking means that when the cache expires, only one process will handle the miss and call the (potentially expensive) re-generation method.

If `returnStale` is true, then all requests for the same key will return a stale version of the cache while it is being regenerated (including the process that is performing the regeneration)
If `returnStale` is true, then it will call the `onMiss` handler in order to update the cache. If the handler takes longer than `waitTimeMs`, the stale data is returned.

If `returnStale` is false (or there is nothing already in the cache), then all requests for that key will wait until the update is complete, and then return the updated version from the cache
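The `waitTimeMs` behaviour can be illustrated as a race between the in-flight update and a timer that falls back to the stale value. The sketch below is a standalone, hypothetical helper (`raceWithStale` is not part of the library's API), assuming the update is an ordinary promise:

```typescript
// Illustrative sketch only, not the library's internal code:
// race an in-flight update against a timer that resolves with the stale value.
async function raceWithStale<T>(update: Promise<T>, stale: T, waitTimeMs: number): Promise<T> {
  if (waitTimeMs <= 0) {
    // No grace period configured: return the stale value immediately
    return stale
  }
  // setTimeout's extra argument is passed to the callback, so the timer resolves with `stale`
  const timeout = new Promise<T>(resolve => setTimeout(resolve, waitTimeMs, stale))
  return Promise.race([update, timeout])
}

async function demo(): Promise<void> {
  // An already-resolved update beats the 50 ms deadline, so the fresh value wins
  console.log(await raceWithStale(Promise.resolve('fresh'), 'stale', 50)) // fresh
  // A slow (200 ms) update misses the deadline, so the stale value is returned
  const slow = new Promise<string>(resolve => setTimeout(resolve, 200, 'fresh'))
  console.log(await raceWithStale(slow, 'stale', 50)) // stale
}

demo()
```

Note that the losing promise is not cancelled: the slow update still settles later, which is why the library completes the cache write in the background after returning the stale value.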

@@ -31,6 +31,7 @@ const myObjectCache = new LeprechaunCache({
lockTtlMs: 6000,
spinMs: 50,
  returnStale: true,
  waitTimeMs: 500,
  onBackgroundError: e => { console.error(e); }
})

@@ -43,15 +44,16 @@ await myObjectCache.clear('object-id') //Remove the item from the cache

## Constructor Options

| Option | type | Description |
| ----------------- | ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| keyPrefix | string? | Optional prefix that will be added to all keys in the underlying store |
| softTtlMs | number (ms) | Soft TTL (in ms) for storing the items in the cache |
| cacheStore | CacheStore | the underlying KV store to use. Must implement CacheStore interface. A node_redis implementation is included. |
| onMiss | function | callback function that will be called when a value is either not in the cache, or the soft TTL has expired. |
| hardTtlMs | number (ms) | the TTL (in ms) to pass to the cacheStore set method - values should hard-expire after this and should no longer be retrievable from the store |
| lockTtlMs | number (ms) | the TTL (in ms) to pass to the cacheStore lock method. While the onMiss function is called, a lock will be acquired. This defines how long the lock should last. This should be longer than the longest time you expect your onMiss handler to take |
| waitForUnlockMs | number (ms) | if the onMiss function is locked, how long should the client wait for it to unlock before giving up. This is relevant when returnStale is false, or when there is no stale data in the cache |
| spinMs | number (ms) | How many milliseconds to wait before re-attempting to acquire the lock |
| returnStale | boolean | if this is true, when the value is expired (by the soft-ttl, set per-key), the library will return the stale result from the cache while updating the cache in the background. The next attempt to get, after this update has resolved, will then return the new version |
| onBackgroundError | function? | Called if there is any error while performing background tasks (calling the onMiss if returnStale true, or while setting the cache / unlocking after returning the data) |
| Option | type | Description |
| ----------------- | ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| keyPrefix | string? | Optional prefix that will be added to all keys in the underlying store |
| softTtlMs | number (ms) | Soft TTL (in ms) for storing the items in the cache |
| cacheStore | CacheStore | the underlying KV store to use. Must implement CacheStore interface. A node_redis implementation is included. |
| onMiss | function | callback function that will be called when a value is either not in the cache, or the soft TTL has expired. |
| hardTtlMs | number (ms) | the TTL (in ms) to pass to the cacheStore set method - values should hard-expire after this and should no longer be retrievable from the store |
| lockTtlMs | number (ms) | the TTL (in ms) to pass to the cacheStore lock method. While the onMiss function is called, a lock will be acquired. This defines how long the lock should last. This should be longer than the longest time you expect your onMiss handler to take |
| waitForUnlockMs | number (ms) | if the onMiss function is locked, how long should the client wait for it to unlock before giving up. This is relevant when returnStale is false, or when there is no stale data in the cache |
| spinMs | number (ms) | How many milliseconds to wait before re-attempting to acquire the lock |
| returnStale | boolean | if this is true, when the value is expired (by the soft-ttl, set per-key), the library will return the stale result (after waitTimeMs) from the cache while updating the cache in the background |
| waitTimeMs        | number (ms) | Optional (default = 0). How long to wait for the onMiss handler to resolve before returning the stale data. If 0, the stale data is returned immediately whenever the soft TTL has expired |
| onBackgroundError | function? | Called if there is any error while performing background tasks (calling the onMiss if returnStale true, or while setting the cache / unlocking after returning the data) |
2 changes: 1 addition & 1 deletion package.json
@@ -1,6 +1,6 @@
{
"name": "@acuris/leprechaun-cache",
"version": "0.0.7",
"version": "0.0.8",
"private": false,
"description": "Caching library that supports double checked caching and stale returns to avoid stampede and slow responses",
"keywords": [
39 changes: 31 additions & 8 deletions src/leprechaun-cache.ts
@@ -19,6 +19,7 @@ export class LeprechaunCache<T extends Cacheable = Cacheable> {
private softTtlMs: number
private hardTtlMs: number
private lockTtlMs: number
private waitTimeMs: number
private returnStale: boolean
private spinWaitCount: number
private cacheStore: CacheStore<T>
@@ -33,6 +34,7 @@ export class LeprechaunCache<T extends Cacheable = Cacheable> {
softTtlMs,
hardTtlMs,
lockTtlMs,
waitTimeMs = 0,
waitForUnlockMs,
cacheStore,
spinMs,
@@ -43,6 +45,7 @@ export class LeprechaunCache<T extends Cacheable = Cacheable> {
this.hardTtlMs = hardTtlMs
this.softTtlMs = softTtlMs
this.lockTtlMs = lockTtlMs
this.waitTimeMs = waitTimeMs
this.spinWaitCount = Math.ceil(waitForUnlockMs / spinMs)
this.spinMs = spinMs
this.cacheStore = cacheStore
@@ -81,15 +84,35 @@ export class LeprechaunCache<T extends Cacheable = Cacheable> {
if (!result) {
return this.updateCache(key, ttl, true)
}
if (result.expiresAt < Date.now()) {
const update = this.updateCache(key, ttl, !this.returnStale)
if (this.returnStale) {
update.catch(this.onBackgroundError)
} else {
return update
}

if (result.expiresAt > Date.now()) {
return result.data
}
return result.data

const update = this.updateCache(key, ttl, !this.returnStale)

if (!this.returnStale) {
return update
}

return this.race(update, result.data)
}

private async race(update: Promise<T>, staleData: T): Promise<T> {
update.catch(e => {
this.onBackgroundError(e)
return staleData
})

if (this.waitTimeMs <= 0) {
return staleData
}

const returnStaleAfterWaitTime: Promise<T> = new Promise(resolve => {
setTimeout(resolve, this.waitTimeMs, staleData)
})

return Promise.race([update, returnStaleAfterWaitTime])
}

private async spinLock(key: string): Promise<LockResult> {
1 change: 1 addition & 0 deletions src/types.ts
@@ -20,6 +20,7 @@ export interface LeprechaunCacheOptions<T extends Cacheable = Cacheable> {
softTtlMs: number
hardTtlMs: number
lockTtlMs: number
waitTimeMs?: number
waitForUnlockMs: number
cacheStore: CacheStore<T>
spinMs: number
67 changes: 67 additions & 0 deletions test/integration/leprechaun-cache-redis.spec.ts
@@ -128,6 +128,73 @@ describe('Leprechaun Cache (integration)', () => {
expect(onMiss).calledTwice
})

it('will return the updated data if the onMiss handler resolves in less time than waitTimeMs', async () => {
const data1 = {
some: 'data'
}
const data2 = {
some: 'new data'
}

const key = 'key'
const onMiss = sandbox.stub().resolves(data1)

const cache = new LeprechaunCache({
softTtlMs: 80,
hardTtlMs: 10000,
waitForUnlockMs: 1000,
spinMs: 50,
lockTtlMs: 1000,
cacheStore,
returnStale: true,
waitTimeMs: 50,
onMiss
})

const result = await cache.get(key)
expect(result).to.deep.equal(data1)
await delay(100) //delay for the ttl

onMiss.resolves(data2)

const result2 = await cache.get(key)
expect(result2).to.deep.equal(data2)
})

it('will return the stale data if the onMiss handler takes longer than waitTimeMs to resolve', async () => {
const data1 = {
some: 'data'
}
const data2 = {
some: 'new data'
}

const key = 'key'
const onMiss = sandbox.stub().resolves(data1)

const cache = new LeprechaunCache({
softTtlMs: 80,
hardTtlMs: 10000,
waitForUnlockMs: 1000,
spinMs: 50,
lockTtlMs: 1000,
cacheStore,
returnStale: true,
waitTimeMs: 50,
onMiss
})

const result = await cache.get(key)
expect(result).to.deep.equal(data1)
await delay(100) //delay for the ttl

onMiss.returns(new Promise(resolve => setTimeout(resolve, 100, data2)))

const result2 = await cache.get(key)
expect(result2).to.deep.equal(data1)
await delay(100) //short delay to allow the background update to finish
})

it('should spin-lock until the new results are available if the cache is stale and another process is updating it (returnStale false)', async () => {
const data1 = {
some: 'data'
67 changes: 67 additions & 0 deletions test/unit/leprechaun-cache.spec.ts
@@ -116,6 +116,73 @@ describe('Leprechaun Cache', () => {
expect(onMiss).calledTwice
})

it('will return the updated data if the onMiss handler resolves in less time than waitTimeMs', async () => {
const data1 = {
some: 'data'
}
const data2 = {
some: 'new data'
}

const key = 'key'
const onMiss = sandbox.stub().resolves(data1)

const cache = new LeprechaunCache({
softTtlMs: 80,
hardTtlMs: 10000,
waitForUnlockMs: 1000,
spinMs: 50,
lockTtlMs: 1000,
cacheStore: memoryCacheStore,
returnStale: true,
waitTimeMs: 50,
onMiss
})

const result = await cache.get(key)
expect(result).to.deep.equal(data1)
await delay(100) //delay for the ttl

onMiss.resolves(data2)

const result2 = await cache.get(key)
expect(result2).to.deep.equal(data2)
})

it('will return the stale data if the onMiss handler takes longer than waitTimeMs to resolve', async () => {
const data1 = {
some: 'data'
}
const data2 = {
some: 'new data'
}

const key = 'key'
const onMiss = sandbox.stub().resolves(data1)

const cache = new LeprechaunCache({
softTtlMs: 80,
hardTtlMs: 10000,
waitForUnlockMs: 1000,
spinMs: 50,
lockTtlMs: 1000,
cacheStore: memoryCacheStore,
returnStale: true,
waitTimeMs: 50,
onMiss
})

const result = await cache.get(key)
expect(result).to.deep.equal(data1)
await delay(100) //delay for the ttl

onMiss.returns(new Promise(resolve => setTimeout(resolve, 100, data2)))

const result2 = await cache.get(key)
expect(result2).to.deep.equal(data1)
await delay(100) //short delay to allow the background update to finish
})

it('should spin-lock until the new results are available if the cache is stale and another process is updating it (returnStale false)', async () => {
const data1 = {
some: 'data'
