
[CCACHE-35] Add weak-cache Created: 24/Aug/14  Updated: 24/Aug/14

Status: Open
Project: core.cache
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Peter Fraenkel Assignee: Fogus
Resolution: Unresolved Votes: 0
Labels: None

Patch: Code and Test

 Description   

Makes a weak-cache-factory available. Entries will be eagerly collected by the JVM, rather than only under memory pressure.
The implementation is fairly simple: I changed SoftCache to NonStrongCache and added an extra record parameter, the function
used to create the reference.
See https://github.com/pnf/core.cache/compare/clojure:master...master
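The idea can be sketched roughly like this (hypothetical names and shapes, not the actual patch; see the linked branch for the real change): the record carries the reference constructor, so the soft and weak variants differ only in that one parameter.

```clojure
(import '[java.lang.ref Reference SoftReference WeakReference ReferenceQueue])

;; Hypothetical sketch of the NonStrongCache idea: `make-ref` is the extra
;; record field that decides whether entries are softly or weakly reachable.
(defrecord NonStrongCache [cache rq make-ref])

(defn ns-lookup [{:keys [cache]} k not-found]
  (if-let [^Reference r (get cache k)]
    (or (.get r) not-found)       ; referent may already have been collected
    not-found))

(defn ns-miss [{:keys [rq make-ref] :as c} k v]
  (update c :cache assoc k (make-ref v rq)))

(defn weak-cache [] (->NonStrongCache {} (ReferenceQueue.) #(WeakReference. %1 %2)))
(defn soft-cache [] (->NonStrongCache {} (ReferenceQueue.) #(SoftReference. %1 %2)))
```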






[CCACHE-31] SoftCaches update in-place, other cache types don't Created: 25/Jul/13  Updated: 04/Aug/14

Status: Open
Project: core.cache
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Tassilo Horn Assignee: Fogus
Resolution: Unresolved Votes: 1
Labels: None

Attachments: File immutable.diff    

 Description   

I've successfully used SoftCaches in my project, and now I needed another simple cache, so I created a BasicCache. In doing so I fell into the trap that while SoftCaches update in place, BasicCaches don't (and, looking at the code, neither does any other cache type except the aforementioned SoftCache).

user> (def C (cache/soft-cache-factory {}))
#'user/C
user> (cache/miss C :a 1)
{:a #<SoftReference java.lang.ref.SoftReference@2bf31c53>}
user> C
{:a #<SoftReference java.lang.ref.SoftReference@2bf31c53>} ;; updated in-place
user> (def C (cache/basic-cache-factory {}))
#'user/C
user> (cache/miss C :a 1)
{:a 1}
user> C                                                    ;; nope, no update, still initial value
{}

So with every cache type except for SoftCaches, you have to use the return value of `miss` (and `evict`). But the documentation doesn't mention that at all. The typical has?/miss/hit snippets explaining the usage pattern also discard the value of `miss`.

IMHO, a cache is something mutable and should of course update in place. I mean, when you `(def C (some-cache...))`, are you really supposed to use `alter-var-root` or make the cache Var dynamic and use `set!`?
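For what it's worth, the pattern the immutable cache types seem to expect is to hold the cache in an atom and thread every operation through swap!; a minimal sketch (the helper name is made up here):

```clojure
(require '[clojure.core.cache :as cache])

(def C (atom (cache/basic-cache-factory {})))

;; Made-up helper: returns the value for k, computing and caching it on a
;; miss. The atom always holds the latest cache value. Note that value-fn
;; may be called more than once under contention, since swap! can retry.
(defn lookup-or-compute [cache-atom k value-fn]
  (cache/lookup (swap! cache-atom
                       (fn [c]
                         (if (cache/has? c k)
                           (cache/hit c k)
                           (cache/miss c k (value-fn k)))))
                k))

(lookup-or-compute C :a (fn [_] 1)) ;=> 1
;; @C now holds the updated cache, no alter-var-root needed
```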



 Comments   
Comment by Ambrose Bonnaire-Sergeant [ 04/Aug/14 1:52 PM ]

Patch distinguishes between mutable and immutable caches, and classifies SoftCache and TTLCache as mutable.





[CCACHE-15] It appears that TTL cache exhibits quadratic performance (+ its evict is buggy) Created: 14/Dec/11  Updated: 27/May/14

Status: Open
Project: core.cache
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Jason Wolfe Assignee: Fogus
Resolution: Unresolved Votes: 0
Labels: None

Attachments: Text File Avoid-scanning-all-ttl-entries.patch    

 Description   

The library looks useful, thanks!

I looked at the code, and unless I'm mistaken, every cache miss seems to result in a full pass over the entire cache to evict old entries. The performance implications of this would be unacceptable for my target application. Replacing the TTL data structure with a persistent analog of a LinkedHashMap and using a take-while instead could fix this problem.

Also, evict seems to pass the cache in twice, rather than the cache and the TTL...
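To illustrate the suggestion (assumed shapes, not the actual TTLCache internals): if the expiry bookkeeping is kept sorted by write time, eviction can stop at the first fresh entry instead of scanning everything.

```clojure
;; Sketch only: ttl here is a sorted map of write-time -> key, so a
;; take-while can replace the full pass over all entries.
(defn expired-keys [ttl ttl-ms now]
  (map val (take-while (fn [[t _]] (> (- now t) ttl-ms)) ttl)))

(expired-keys (sorted-map 100 :a, 200 :b, 900 :c) 500 1000)
;=> (:a :b)
```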



 Comments   
Comment by Fogus [ 15/Dec/11 12:21 PM ]

TTLCache eviction fixed. Patches welcomed for the other change, but we might be able to get away with a sorted map ttl map.

Comment by Kimmo Koskinen [ 22/May/14 12:09 AM ]

Could we use a priority map (https://github.com/clojure/data.priority-map) for the ttl map and iterate it with

 (for [... :while]) 
so that we break out early?

Comment by Kimmo Koskinen [ 25/May/14 5:16 AM ]

I tried using priority-map for the ttl data and take-while instead of filter in key-killer. This would seem to help when the entries in the cache are mostly fresh, since in that case we avoid iterating over all entries in ttl. We're still in the original complexity class, but this should be an improvement nonetheless.

The change is here https://github.com/viesti/core.cache/commit/ca76508ae89ea22ce6551017403d76879805c26c, what do you think about it? I'm planning on getting to test it in our environment (~30000 cache entries, 100-200 queries per second).

Comment by Kimmo Koskinen [ 25/May/14 5:28 AM ]

Here's the change as a patch file too. Replaces filter with take-while in key-killer and uses priority-map to keep ttl data sorted.

Comment by Kimmo Koskinen [ 27/May/14 4:38 AM ]

Hmm, I made another observation. I have a scenario where I combine TTLCache with LUCache so that I keep a bounded number of items in cache and have an expiration time limit too.

The ttl structure in TTLCache isn't aware of this behaviour, so it just accumulates entries that have been evicted from the backing cache.





[CCACHE-32] LIRSCache defect allows its memory use to grow without bound Created: 10/Oct/13  Updated: 10/Oct/13

Status: Open
Project: core.cache
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Jonathan Coveney Assignee: Fogus
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Hello! I was messing around with the LIRSCache and ran into what I think is a bug.

https://github.com/clojure/core.cache/blob/master/src/main/clojure/clojure/core/cache.clj#L371

In the case of a bunch of misses on unique keys, then the stack S will grow without bound and violate the limit that you give it.

(def C (cache/lirs-cache-factory {} :s-history-limit 2 :q-history-limit 1))

(defn populate [n]
  (let [pairs (map (fn [x] [x x]) (range n))]
    (reduce (fn [cache [k v]] (cache/miss cache k v)) C pairs)))

(.toString (populate 10))
"{9 9, 1 1, 0 0}, {0 1, 1 2, 2 3, 3 4, 4 5, 5 6, 6 7, 7 8, 8 9, 9 10}, {9 10}, 10, 2, 1"

You can see that the S stack is growing without bound, and if you do (populate 1000), then it is 1000 pieces large.

I'm not sure what the desired outcome is, but any time we add something to one of the queues, we need to make sure that we're doing the right thing with what is kicked out, unless I'm misinterpreting things.
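One way to "do the right thing" when S exceeds its limit would be to prune it on the way in; a standalone sketch (shapes assumed, not the actual LIRSCache internals, where S maps keys to ticks):

```clojure
;; Repeatedly drop the oldest entry (smallest tick) until the stack is
;; within its limit again.
(defn prune-stack [s limit]
  (if (<= (count s) limit)
    s
    (recur (dissoc s (key (apply min-key val s))) limit)))

(prune-stack {:a 1 :b 2 :c 3} 2)
;=> {:b 2, :c 3}
```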

Thanks for making Clojure awesome!






[CCACHE-27] Miss on LU and LRU is linear complexity - performance Created: 04/Sep/12  Updated: 12/Sep/12

Status: Open
Project: core.cache
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Ghadi Shayban Assignee: Fogus
Resolution: Unresolved Votes: 0
Labels: enhancement, performance
Environment:

Mac Oracle JDK, Linux OpenJDK


Attachments: Text File priority-map-fixed.patch     Text File priority-map.patch    
Patch: Code

 Description   

Profiling some code with YourKit showed a hotspot in cache statistics on (miss) for LU and LRU caches.

Basically two issues. First, (count (keys {....})) is a count of a seq, which is not efficient. Replaced with a count of the map.

Secondly, (apply f anything) is slow. The profiler showed that the (apply min-key) call was really slow. This is mitigated by using a c.d.priority-map instead: on a priority-map, the least entry is available via (ffirst {}) or (first (peek {})).

Also inverted the logic for the threshold comparison. Since there is a (build-leastness-queue) that populates statistics, should caches favor code paths for the state of being full all the time?
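Both points can be seen in a few lines (using clojure.data.priority-map; `lu` here mimics the key -> use-count statistics of an LU cache):

```clojure
(require '[clojure.data.priority-map :refer [priority-map]])

;; `lu` stands in for the key -> use-count bookkeeping of an LU cache.
(def lu (priority-map :a 3 :b 1 :c 2))

;; O(n) scan over all keys on every miss:
(apply min-key lu (keys lu)) ;=> :b
;; cheap lookup of the least-used key on a priority-map:
(ffirst lu)                  ;=> :b
(first (peek lu))            ;=> :b
;; and (count lu) rather than (count (keys lu))
```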



 Comments   
Comment by Ghadi Shayban [ 06/Sep/12 10:49 PM ]

I take back the part about (apply) being slow. I think it's more the fact that it's doing a linear scan on keys every single time.

(apply) does show up a lot in a profiler, but I haven't figured out why. (apply + (range big)) seems to even be slightly faster than (reduce +) on the same range.

Comment by Ghadi Shayban [ 12/Sep/12 9:27 AM ]

Patch in proper format





[CCACHE-20] Add some examples to github page Created: 08/Feb/12  Updated: 19/Feb/12

Status: Open
Project: core.cache
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Task Priority: Major
Reporter: Juha Syrjälä Assignee: Fogus
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Could you add some examples on how to use core.cache to github readme?



 Comments   
Comment by Fogus [ 19/Feb/12 6:24 PM ]

Can do. In the meantime, check out how core.memoize uses it at https://github.com/clojure/core.memoize/blob/master/src/main/clojure/clojure/core/memoize.clj#L50

I think I might bring this `through` fn into core.cache in a more generic way.
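A generic version might look something like this (a guess at the shape built from the existing CacheProtocol operations, not an existing core.cache function):

```clojure
(require '[clojure.core.cache :as cache])

;; Guessed shape of a generic `through`: record a hit when k is cached,
;; otherwise compute the value and record a miss. Returns the new cache.
(defn through [value-fn c k]
  (if (cache/has? c k)
    (cache/hit c k)
    (cache/miss c k (value-fn k))))

(cache/lookup (through str (cache/basic-cache-factory {}) :a) :a)
;=> ":a"
```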





[CCACHE-17] Create function backed cache Created: 16/Dec/11  Updated: 19/Dec/11

Status: Open
Project: core.cache
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Fogus Assignee: Fogus
Resolution: Unresolved Votes: 0
Labels: cache, fn-cache, new-feature


 Description   

A cache implementation that is backed by a function that performs some action on a cache miss could serve as a front for any of the existing cache impls.



 Comments   
Comment by Rich Hickey [ 16/Dec/11 9:45 AM ]

It doesn't perform an action per se: it gets passed a key and returns a value, which the cache then caches (associates with the key) and returns. The tricky bit is when the function can't get a value. There needs to be some protocol for communicating that (it could be like 3-arg get) and for calling the fn again should the cache be asked later for the same key.
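A sentinel-based sketch of those lookup semantics (all names hypothetical): the backing fn receives the key plus a sentinel and returns the sentinel when it can't produce a value, in which case nothing is cached and the fn will be called again next time.

```clojure
(def ^:private none (Object.))

;; Hypothetical fn-backed lookup: returns [cache' value]. `f` is called as
;; (f k none); a cached nil stays distinguishable from absence.
(defn fn-backed-lookup [cache f k not-found]
  (let [v (get cache k none)]
    (if-not (identical? v none)
      [cache v]
      (let [fv (f k none)]
        (if (identical? fv none)
          [cache not-found]          ; f couldn't get a value: don't cache it
          [(assoc cache k fv) fv])))))

(fn-backed-lookup {} (fn [k _] (name k)) :a ::nf)
;=> [{:a "a"} "a"]
```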

Comment by Fogus [ 19/Dec/11 7:29 AM ]

Thanks for the feedback Rich. I believe I understand the subtleties now.





[CCACHE-18] Explore JSR 107- Java Temporary Caching API Created: 19/Dec/11  Updated: 19/Dec/11

Status: Open
Project: core.cache
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Task Priority: Trivial
Reporter: Fogus Assignee: Fogus
Resolution: Unresolved Votes: 0
Labels: jsr-107, research


 Description   

Is this relevant to core.cache? That, and many other questions, to be answered.

<http://jcp.org/en/jsr/detail?id=107>






[CCACHE-16] Benchmark v0.5.x against Google Guava Created: 16/Dec/11  Updated: 16/Dec/11

Status: Open
Project: core.cache
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Task Priority: Major
Reporter: Fogus Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: benchmark, cache, guava


 Description   

Perform some tests and benchmarks of core.cache and Google Guava's com.google.common.cache library.






[CCACHE-14] Asynchronous Cache Support Created: 13/Dec/11  Updated: 13/Dec/11

Status: Open
Project: core.cache
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Pierre-Yves Ritschard Assignee: Fogus
Resolution: Unresolved Votes: 0
Labels: None


 Description   

If people start implementing the cache protocol on top of external caches (Memcache, Redis, or others), an async version could make sense.

I started toying with an AsyncCacheProtocol in which every value-returning function would have two arities: a standard one returning an instance of IRef, and a second taking an extra callback argument to be called with the result. This doesn't solve everything, though, and async versions of LRU and friends would have to be implemented.

The alternative would be additional calls in CacheProtocol, such as async-has? and async-lookup, which would implement the two-arity semantics and rely on the underlying cache respecting some sort of async semantics. Since we cannot do that with Associative, maybe another middleman protocol, CacheStorage, could be used; that way, all an external cache provider would have to do is implement a CacheProvider with optional asynchronous support.

I hope at least part of this makes sense.






[CCACHE-11] Add eviction implementation to LIRSCache Created: 08/Dec/11  Updated: 08/Dec/11

Status: Open
Project: core.cache
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Task Priority: Major
Reporter: Fogus Assignee: Fogus
Resolution: Unresolved Votes: 0
Labels: associative, cache, evict, lirs


 Description   

The evict method in the CacheProtocol needs an implementation for LIRSCache. I will start with a single-key eviction method. The evict method would form the basis for the associative dissoc, which in turn forms the basis for proper limited seeding. LIRS poses an additional complication due to its dual limit lists. Currently only the BasicCache impl has evict.
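For LIRS that complication is concrete: a single-key evict has to touch the value map and both history structures. Roughly (field names assumed, not the actual LIRSCache internals):

```clojure
;; Assumed shape: `cache` holds the values, `stack-s` and `queue-q` are the
;; two LIRS history structures (key -> tick). Evicting a key clears all three.
(defn lirs-evict-one [state k]
  (-> state
      (update :cache dissoc k)
      (update :stack-s dissoc k)
      (update :queue-q dissoc k)))

(lirs-evict-one {:cache {:a 1} :stack-s {:a 10} :queue-q {}} :a)
;=> {:cache {}, :stack-s {}, :queue-q {}}
```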






Generated at Wed Oct 01 01:02:01 CDT 2014 using JIRA 4.4#649-r158309.