
[CCACHE-27] Miss on LU and LRU is linear complexity - performance Created: 04/Sep/12  Updated: 12/Sep/12

Status: Open
Project: core.cache
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Ghadi Shayban Assignee: Fogus
Resolution: Unresolved Votes: 0
Labels: enhancement, performance
Environment:

Mac Oracle JDK, Linux OpenJDK


Attachments: Text File priority-map-fixed.patch     Text File priority-map.patch    
Patch: Code

 Description   

Profiling some code with YourKit showed a hotspot in cache statistics on (miss) for LU and LRU caches.

Basically two issues. First, (count (keys {....})) counts a seq of the keys, which is not efficient; replaced it with a count of the map itself.
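
For illustration, a minimal sketch of the difference (the map literal is made up, not taken from the patch):

    ;; (keys m) returns a seq, so counting it walks every entry:
    (count (keys {:a 1 :b 2 :c 3}))   ;=> 3, linear in the size of the map

    ;; counting the map itself uses the collection's stored size:
    (count {:a 1 :b 2 :c 3})          ;=> 3, constant time for persistent maps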

Secondly, (apply f anything) is slow. The profiler showed that the (apply min-key ...) call was really slow. This is mitigated by using a c.d.priority-map (clojure.data.priority-map) instead. On a priority-map the minimum entry is available directly via (ffirst {}) or (first (peek {})).
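
A rough REPL sketch of the two retrieval styles, assuming org.clojure/data.priority-map is on the classpath (the example map is illustrative):

    (require '[clojure.data.priority-map :refer [priority-map]])

    (def pm (priority-map :a 3 :b 1 :c 2))

    ;; priority-map keeps entries ordered by priority, so the minimum is at the front:
    (ffirst pm)         ;=> :b  (key of the lowest-priority entry)
    (first (peek pm))   ;=> :b  (peek returns the lowest [key priority] entry)

    ;; versus the linear scan over all keys that the ticket describes:
    (apply min-key {:a 3 :b 1 :c 2} (keys {:a 3 :b 1 :c 2}))  ;=> :b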

Also inverted the logic for the threshold comparison. Since there is a (build-leastness-queue) that populates statistics, should caches favor the codepath for the state of being full all the time?
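
A hedged sketch of what favoring the "cache is full" codepath could look like (miss-step and find-victim are made-up names, not core.cache internals):

    ;; On a miss, when the cache is already at its limit (the usual steady
    ;; state), evict a victim chosen by find-victim before adding the entry.
    (defn miss-step [cache limit find-victim k v]
      (if (>= (count cache) limit)
        (assoc (dissoc cache (find-victim cache)) k v)   ; common path: cache is full
        (assoc cache k v)))                              ; rare path: still filling up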



 Comments   
Comment by Ghadi Shayban [ 06/Sep/12 10:49 PM ]

I take back the part about (apply) being slow. I think it's more the fact that it's doing a linear scan on keys every single time.

(apply) does show up a lot in a profiler, but I haven't figured out why. (apply + (range big)) even seems to be slightly faster than (reduce +) on the same range.
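
A quick, unscientific way to check that at the REPL (a sketch; timings depend heavily on JVM warm-up and the range size chosen):

    (time (apply + (range 10000000)))   ; variadic + reduces over its rest args internally
    (time (reduce + (range 10000000)))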

Comment by Ghadi Shayban [ 12/Sep/12 9:27 AM ]

Patch in proper format
