core.cache

Miss handling on LU and LRU caches is linear complexity - performance

Details

  • Type: Defect
  • Status: Open
  • Priority: Major
  • Resolution: Unresolved
  • Affects Version/s: None
  • Fix Version/s: None
  • Component/s: None
  • Environment:
    Mac Oracle JDK, Linux OpenJDK
  • Patch:
    Code

Description

Profiling some code with YourKit showed a hotspot in the cache-statistics path on a miss for LU and LRU caches.

Basically two issues: first, (count (keys {....})) counts a seq of the keys, which is not efficient. Replaced with a count of the map itself.
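A minimal sketch of that first fix (the map here is a hypothetical stand-in for the cache's statistics map):

```clojure
;; (count (keys m)) realizes a seq of the keys and walks it, which is O(n);
;; a persistent map already knows its size, so (count m) is effectively O(1).
(let [m (zipmap (range 100000) (range 100000))]
  [(count (keys m))   ; slow: counts a seq of the keys
   (count m)])        ; fast: constant-time count of the map itself
```

Both expressions return the same number; only the cost differs.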

Secondly, (apply f anything) is slow. The profiler showed that the (apply min-key ...) call was really slow. This is mitigated by using a c.d.priority-map instead: on a priority-map, the minimum entry is available via (ffirst cache) or (first (peek cache)).
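A small sketch of the priority-map approach (assumes the org.clojure/data.priority-map dependency is on the classpath; the keys and priorities are made up for illustration):

```clojure
(require '[clojure.data.priority-map :refer [priority-map]])

;; A priority-map keeps entries ordered by value (priority), so the
;; least-used entry is available without a linear (apply min-key ...)
;; scan over every key.
(let [tick-pm (priority-map :a 1 :b 3 :c 2)]
  [(peek tick-pm)     ;; => [:a 1], the entry with the lowest priority
   (ffirst tick-pm)]) ;; => :a, just the key of that entry
```

Evicting the least entry is then just (pop tick-pm), rather than a scan-and-dissoc.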

Also inverted the logic for the threshold comparison. Since there is a (build-leastness-queue) that populates statistics, should caches favor the codepath for the state of being full all the time?

  1. priority-map.patch
    04/Sep/12 11:26 AM
    5 kB
    Ghadi Shayban
  2. priority-map-fixed.patch
    12/Sep/12 9:27 AM
    6 kB
    Ghadi Shayban

Activity

Ghadi Shayban added a comment -

I take back the part about (apply) being slow. I think it's more the fact that it's doing a linear scan on keys every single time.

(apply) does show up a lot in a profiler, but I haven't figured out why. (apply + (range big)) even seems to be slightly faster than (reduce +) on the same range.

Ghadi Shayban added a comment -

Patch in proper format

