core.cache

It appears that TTL cache exhibits quadratic performance (+ its evict is buggy)

Details

  • Type: Defect
  • Status: Open
  • Priority: Major
  • Resolution: Unresolved
  • Affects Version/s: None
  • Fix Version/s: None
  • Component/s: None
  • Labels: None

Description

The library looks useful, thanks!

I looked at the code, and unless I'm mistaken, every cache miss seems to result in a full pass over the entire cache to evict old entries. The performance implications of this would be unacceptable for my target application. Replacing the TTL data structure with a persistent analog of a LinkedHashMap and using a take-while instead could fix this problem.

Also, evict seems to pass the cache in twice, rather than the cache and the TTL...
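
For illustration, here is a minimal sketch of the difference (the ttl data and names below are invented for this example, not core.cache's actual internals). The take-while version only works because the entries are assumed oldest-first, as a structure sorted by write time would yield them:

```clojure
;; ttl data as [key write-time] pairs, oldest first; invented example
(def now 100)
(def limit 10)
(def ttl [[:a 50] [:b 85] [:c 95] [:d 99]])

;; filter touches every entry on each sweep, fresh or not:
(filter (fn [[_ t]] (> (- now t) limit)) ttl)
;; => ([:a 50] [:b 85])

;; take-while stops at the first fresh entry, so only the
;; expired prefix is ever visited:
(take-while (fn [[_ t]] (> (- now t) limit)) ttl)
;; => ([:a 50] [:b 85])
```

Both return the same expired entries here, but filter walks all four pairs while take-while stops after three.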

Activity

Fogus added a comment -

TTLCache eviction fixed. Patches welcomed for the other change, but we might be able to get away with a sorted map ttl map.

Kimmo Koskinen added a comment -

Could we use a priority map (https://github.com/clojure/data.priority-map) for the ttl map and iterate it with

 (for [... :while]) 
so that we break out early?
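
A minimal sketch of that idea (assuming the org.clojure/data.priority-map dependency is on the classpath; the keys and timestamps are invented):

```clojure
(require '[clojure.data.priority-map :refer [priority-map]])

;; a priority map keeps entries sorted by value, so iteration
;; visits the oldest write times first:
(def ttl (priority-map :a 50 :c 95 :b 85))

(seq ttl)
;; => ([:a 50] [:b 85] [:c 95])

;; :while breaks out at the first entry that is still fresh:
(for [[k t] ttl :while (> (- 100 t) 10)] k)
;; => (:a :b)
```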

Kimmo Koskinen added a comment -

I tried using priority-map for the ttl data and take-while instead of filter in key-killer. This seems to help when the entries in the cache are mostly fresh, since in that case we avoid iterating over all the entries in ttl. We're still in the original complexity class, but it should be an improvement nonetheless.

The change is here https://github.com/viesti/core.cache/commit/ca76508ae89ea22ce6551017403d76879805c26c, what do you think about it? I'm planning on getting to test it in our environment (~30000 cache entries, 100-200 queries per second).
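
As a rough sketch, the change amounts to something like this (a simplified key-killer for illustration, not the exact patched code; ttl maps cache keys to write times and is assumed to iterate oldest-first):

```clojure
;; simplified take-while based key-killer; ttl is assumed to
;; iterate oldest-first (e.g. a priority map of key -> write-time)
(defn key-killer [ttl expiry now]
  (let [ks (map key (take-while #(> (- now (val %)) expiry) ttl))]
    ;; returns a fn that strips the expired keys from a map
    #(apply dissoc % ks)))

;; with insertion-ordered example data (invented):
((key-killer (array-map :a 50 :b 85 :c 95) 10 100) {:a 1 :b 2 :c 3})
;; => {:c 3}
```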

Kimmo Koskinen added a comment - edited

Here's the change as a patch file too. Replaces filter with take-while in key-killer and uses priority-map to keep ttl data sorted.

Kimmo Koskinen made changes -
Attachment: Avoid-scanning-all-ttl-entries.patch [ 13041 ]
Kimmo Koskinen added a comment -

Hmm, I made another observation. I have a scenario where I combine TTLCache with LUCache so that I keep a bounded number of items in cache and have an expiration time limit too.

The ttl structure in TTLCache isn't aware of this behaviour, so it just accumulates entries that have been evicted from the backing cache.
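
For reference, the composition described above can be built with core.cache's public factory functions (a sketch; the :threshold and :ttl values here are arbitrary):

```clojure
(require '[clojure.core.cache :as cache])

;; TTL layered over LU: at most 2 live entries in the inner cache,
;; each expiring after 60 seconds in the outer one
(def c (-> {}
           (cache/lu-cache-factory :threshold 2)
           (cache/ttl-cache-factory :ttl 60000)))
```

When the inner LU cache evicts a key, the outer TTLCache's ttl map is not informed, which is the accumulation described above.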


People

Vote (0)
Watch (2)
