Clojure

Keyword cache cleanup incurs linear scan of cache

Details

  • Type: Enhancement
  • Status: Open
  • Priority: Minor
  • Resolution: Unresolved
  • Affects Version/s: Release 1.6
  • Fix Version/s: Release 1.7
  • Component/s: None
  • Patch: Code
  • Approval: Vetted

Description

If the GC reclaims a keyword, any subsequent attempt to create a keyword requires an O(n) scan over the entire keyword table via Util.clearCache. This is a significant performance cost in keyword-heavy operations such as JSON parsing.
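
For reference, the cleanup in question (Util.clearCache, roughly as it appears in Clojure 1.6) looks like this: a single dead entry on the reference queue triggers a scan of every cache entry.

static public <K, V> void clearCache(ReferenceQueue rq,
                                     ConcurrentHashMap<K, Reference<V>> cache) {
    // If anything has been GC'd, drain the queue and scan the WHOLE table,
    // removing entries whose references have been cleared: O(n) per cleanup.
    if (rq.poll() != null) {
        while (rq.poll() != null)
            ;
        for (Map.Entry<K, Reference<V>> e : cache.entrySet()) {
            Reference<V> val = e.getValue();
            if (val != null && val.get() == null)
                cache.remove(e.getKey(), val);
        }
    }
}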

Patch: keyword-cache.diff - patch to defer cleaning until a portion of the table is dead, and to avoid multiple threads cleaning simultaneously.

Patch: kw-clean-future.patch - patch to spin cache cleaning into a future. Found that 1) this introduces some ordering constraints and circularity between Agent and Keyword (fixable), and 2) using the future pool for this means shutdown-agents would always need to be called (in the patch I avoided this by changing Agent's soloExecutor to use daemon threads).

Patch: unified-kw-patch.diff - alternative to keyword-cache.diff and kw-clean-future.patch. Combines all keyword-perf changes: the EDN keyword parser improvement, improved table lookup performance, and threads cooperating to empty the table's refqueue with a minimum of contention.

Attachments

  1. faster-keywords.diff (28 kB, Kyle Kingsbury, 12/May/14 4:23 PM)
  2. keyword-cache.diff (4 kB, Alex Miller, 09/Jun/14 1:53 PM)
  3. kw-clean-future.patch (3 kB, Alex Miller, 09/Jun/14 8:53 PM)
  4. unified-kw-patch.diff (32 kB, Kyle Kingsbury, 22/Jul/14 6:43 PM)

Activity

Alex Miller added a comment -

Any perf-related ticket will need some clear before/after timings (with good methodology and how to repro) and also a consideration of cases where the change may introduce any perf degradation in normal usage.

Kyle Kingsbury added a comment - edited

I've experimented with a patch reducing the cache clearing cost and removing the need for String.intern. Preliminary results are good, but I want to try a few alternative approaches for cache keys. For instance, could we use pure strings like "foo" and "clojure.core/foo" as the cache keys, removing a level of memory indirection? If we're being really sneaky, we could share those same strings with the Symbol _str field to halve our memory use, assuming it's OK to reach in and mutate it.

https://gist.github.com/aphyr/f72e72992dade4578232
http://imgur.com/a/YSgUa#2
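
For concreteness, here is a minimal sketch of the plain-String-key idea floated above; the table field, internSlow helper, and method shape are illustrative assumptions, not the gist's actual code.

static Keyword intern(String ns, String name) {
    // One combined key string: a single hash computation, and no Symbol
    // or CacheKey allocation on the hit path.
    String key = (ns == null) ? name : ns + "/" + name;
    Reference<Keyword> ref = table.get(key);
    Keyword k = (ref == null) ? null : ref.get();
    if (k != null)
        return k;              // fast path: live cached keyword
    return internSlow(key);    // illustrative slow path: build keyword,
                               // putIfAbsent, retry past dead references
}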

Alex Miller added a comment -

Great start on this - having the perf data is hugely important. One thing I don't see you've covered yet is the corresponding memory increase you're incurring with CacheKey to get the benefit - we need to quantify both sides of the tradeoff here (latency/throughput vs memory) to judge it fully.

Questions/comments on your patch...

1) https://gist.github.com/aphyr/f72e72992dade4578232#file-gistfile1-diff-L101 - do we need the (o instanceof CacheKey) check? If the usage of this is constrained then we might be able to skip it (and it will blow up on the next line if something is wrong).

2) https://gist.github.com/aphyr/f72e72992dade4578232#file-gistfile1-diff-L110 - shouldn't we precompute and save the hash code!? The only thing we're making this for is fast hash comparisons. That hash computation is string length dependent - it ain't cheap.

3) https://gist.github.com/aphyr/f72e72992dade4578232#file-gistfile1-diff-L126 - have you tested with other values here? Should have some justification for this.

4) https://gist.github.com/aphyr/f72e72992dade4578232#file-gistfile1-diff-L126 - have you tested with other values here? Should have some justification that this is a reasonable number.

5) https://gist.github.com/aphyr/f72e72992dade4578232#file-gistfile1-diff-L169 - there is a race here (actually more than one if you include getting the tableSize):

Th1: orphansCount = orphans.get()
Th2: orphansCount = orphans.get()
Th2: orphansNew = orphans.getAndSet(0)
Th2: orphansNew > orphansCount -> start cleaning
<huge gc, 10 zillion orphans are created>
Th1: orphansNew = orphans.getAndSet(0)
Th1: orphansNew > orphansCount -> start cleaning

but I guess this is "safe"; we just have multiple threads cleaning in that case.

6) In general it seems pretty sloppy (I don't mean that pejoratively) how the orphans, rq, and cleaning thread are working together and may be out of sync. I get it and I even believe it will work (usually) but I worry about timing issues that I am too dumb to think of.

Is there a simpler strategy? What if every Nth call to intern() cleaned M entries (or up to M% of table)?

7) https://gist.github.com/aphyr/f72e72992dade4578232#file-gistfile1-diff-L177 - if you made the iterator explicit in this loop, it would be safe to call iterator.remove() instead of table.remove() and I believe that would be faster on CHM.

8) https://gist.github.com/aphyr/f72e72992dade4578232#file-gistfile1-diff-L198 - could have two versions of this with/without the symbol. Not sure if that would be faster but they might both inline better into their callers then?

9) https://gist.github.com/aphyr/f72e72992dade4578232#file-gistfile1-diff-L242 - what's the use case for finding an external CacheKey? Fast re-lookup for specialized use? Do we want to commit to this in the API?

Kyle Kingsbury added a comment -

> 1) https://gist.github.com/aphyr/f72e72992dade4578232#file-gistfile1-diff-L101 - do we need the (o instanceof CacheKey) check? If the usage of this is constrained then we might be able to skip it (and it will blow up on the next line if something is wrong).

I'm usually wary of violating equality/hashCode contracts, and this doesn't even appear as a blip on the radar in profiling. I think we could drop it.

> 2) https://gist.github.com/aphyr/f72e72992dade4578232#file-gistfile1-diff-L110 - shouldn't we precompute and save the hash code!? The only thing we're making this for is fast hash comparisons. That hash computation is string length dependent - it ain't cheap.

It's less memoizable than you might think; each CacheKey is only indexed a few times, and only at query time; it also doesn't help us for equality checks, since those only occur after hashing. I can add a memoizing field for it at the cost of another 32 bits/kw; we'll see how it impacts performance.
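
A sketch of what a CacheKey with a memoized hash might look like; the class shape and names are illustrative, not the patch's actual code.

final class CacheKey {
    final String ns;    // may be null for unqualified keywords
    final String name;
    final int hash;     // precomputed once; costs 32 bits per key

    CacheKey(String ns, String name) {
        this.ns = ns;
        this.name = name;
        this.hash = 31 * (ns == null ? 0 : ns.hashCode()) + name.hashCode();
    }

    public int hashCode() { return hash; }

    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof CacheKey)) return false;  // the check debated in #1
        CacheKey k = (CacheKey) o;
        return name.equals(k.name)
            && (ns == null ? k.ns == null : ns.equals(k.ns));
    }
}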

> 3) https://gist.github.com/aphyr/f72e72992dade4578232#file-gistfile1-diff-L126 - have you tested with other values here? Should have some justification for this.

I experimented with several values on the Clojure test suite, benchmarks, and some real-world Hadoop code. Diminishing returns, as you'd expect. 0.1 and 0.5 are essentially identical in runtime tradeoff. We could drop to 0.01 if desired; it'll only make a difference in large (10-100K) extant keyword benchmarks, as far as I can tell.

> 4) https://gist.github.com/aphyr/f72e72992dade4578232#file-gistfile1-diff-L126 - have you tested with other values here? Should have some justification that this is a reasonable number.

Same question as #3?

> 5) https://gist.github.com/aphyr/f72e72992dade4578232#file-gistfile1-diff-L169 - there is a race here (actually more than one if you include getting the tableSize):
>
> Th1: orphansCount = orphans.get()
> Th2: orphansCount = orphans.get()
> Th2: orphansNew = orphans.getAndSet(0)
> Th2: orphansNew > orphansCount -> start cleaning
> <huge gc, 10 zillion orphans are created>
> Th1: orphansNew = orphans.getAndSet(0)
> Th1: orphansNew > orphansCount -> start cleaning
>
> but I guess this is "safe"; we just have multiple threads cleaning in that case.

Yep. This check is only there as an optimization. Note that if a huge GC occurs, it's likely we want to schedule a followup traversal of the table anyway, because a thread that's already cleaned part of the table has probably missed some subsequently GC'ed elements. The number of concurrently cleaning threads is bounded by the rate of GC churn, and in the most pathological case (sadly, I haven't been able to produce this race experimentally), this degenerates to the old Clojure behavior of every thread doing a full scan.

> 6) In general it seems pretty sloppy (I don't mean that pejoratively) how the orphans, rq, and cleaning thread are working together and may be out of sync. I get it and I even believe it will work (usually) but I worry about timing issues that I am too dumb to think of.
>
> Is there a simpler strategy? What if every Nth call to intern() cleaned M entries (or up to M% of table)?

Every nth call is just fine, but it degrades more poorly for large tables. In general, I try to lean towards scale-invariant solutions, which is why I aimed to reclaim roughly a tenth of the entries in the map every time. Maybe more, maybe less, depending on CAS retries, delayed threads resetting the counter to zero, etc.

Doing M entries or M% is trickier; you've got to figure out which threads collect what fraction when, how to efficiently access that subsection of the hash, how to make sure elements don't fall through the cracks, etc.
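
A sketch of the orphan-threshold trigger being discussed; the 0.1 fraction, field names, and sweep() helper are illustrative, not the patch's actual code.

static final AtomicInteger orphans = new AtomicInteger();

static void maybeClean(ConcurrentHashMap<CacheKey, Reference<Keyword>> table,
                       ReferenceQueue<Keyword> rq) {
    while (rq.poll() != null)
        orphans.incrementAndGet();      // count newly dead entries
    // Clean only once roughly a tenth of the table is dead.
    int threshold = Math.max(1, (int) (0.1 * table.size()));
    // getAndSet(0) lets one thread win; a racing second cleaner is
    // harmless, just redundant (the interleaving traced in #5).
    if (orphans.get() >= threshold && orphans.getAndSet(0) >= threshold)
        sweep(table);                   // illustrative full-table sweep
}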

> 7) https://gist.github.com/aphyr/f72e72992dade4578232#file-gistfile1-diff-L177 - if you made the iterator explicit in this loop, it would be safe to call iterator.remove() instead of table.remove() and I believe that would be faster on CHM.

I agree. I figured Rich had a good reason for doing it this way, but if you concur I'll change it.
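
The explicit-iterator form suggested in #7 would look something like this (a sketch; note that iterator.remove() removes the current entry unconditionally, unlike the conditional table.remove(key, ref), so the dead-reference check must come first).

Iterator<Map.Entry<CacheKey, Reference<Keyword>>> it =
    table.entrySet().iterator();
while (it.hasNext()) {
    Reference<Keyword> ref = it.next().getValue();
    if (ref != null && ref.get() == null)
        it.remove();   // removes via the iterator; no second hash lookup
}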

> 8) https://gist.github.com/aphyr/f72e72992dade4578232#file-gistfile1-diff-L198 - could have two versions of this with/without the symbol. Not sure if that would be faster but they might both inline better into their callers then?

I agree. We can do that dispatch statically and cut down on branch misprediction, too.

> 9) https://gist.github.com/aphyr/f72e72992dade4578232#file-gistfile1-diff-L242 - what's the use case for finding an external CacheKey? Fast re-lookup for specialized use? Do we want to commit to this in the API?

Keep forgetting Java's obsession with encapsulation. I'll privatize.

Alex Miller added a comment -

On several of these - 2, 7, 8 - I think those are worth a test. If faster, we should consider.

On 9, I thought maybe you were opening it up so it would be possible to save off a CacheKey and reuse it or something else. If it's not needed externally, then might be good to private-ize CacheKey itself so we can change it later.

Kyle Kingsbury added a comment - edited

http://imgur.com/a/1bv3P#0
https://gist.github.com/aphyr/f72e72992dade4578232

These charts show the performance impact of several changes. In order, they are:

1.7                 baseline
kw                  initial patch
kw-static-paths     Separate codepaths for interning symbols vs strings. Iterator
                    .remove for cache cleaning. Fix a bug for null comparisons
                    in CacheKey namespaces. Internal functions now protected, not
                    public. Not much performance impact.
kw-memo-hash        Memoize hashcodes for CacheKeys. Performance is a wash.
kw-string-cachekeys Observing that String.indexOf('/') consumed a significant 
                    fraction of interning time, use a combined "ns/name" string for
                    CacheKeys instead of separate strings. Significant performance 
                    boost in all tests; 40% reduction in median latencies in 1000-
                    kw allocation test, for instance.
kw-string-keys      Use raw strings for CacheKeys. Improves performance by removing
                    a level of memory indirection, even without cached hashcodes.
kw-interned-keys    Intern those strings to reduce memory consumption, sharing
                    them with the underlying symbol's strings. Slightly slower.

Performance is even better now. Creating 1000 keywords: median latency fell from 900 to 200 micros, the .999 quantile is lower, and throughput rose from 4,000 to 20,000/sec. JSON parsing median latency fell from 170 micros to 100 micros; throughput doubled from 17,500 docs/sec to 36,000 docs/sec.

We're still suffering from poor dispersal in ConcurrentHashMap's use of the String hashCode on JDK 7/8, but maybe that's a subject for a different patch.

Memory impact is now minimal. We intern every key string in the table, and those strings are interned by the symbols anyway, so they're essentially the same object. For namespaced symbols, we pay a slightly higher cost, forcing the interning of the "ns/name" string instead of deferring it to Symbol.toString() time. For non-namespaced symbols, these strings are interned as part of the symbol creation process, so there's no memory overhead.

At the repl, I tested by allocating and retaining a million keywords:

(def x (mapv keyword (map (partial str "test-kw-") (range 1e6))))

Retained size                        1.7          string-kw
------------------------------------------------------------
Total retained heap              221     MB    221     MB
clojure.lang.Symbols             104.820 MB     32.900 MB
clojure.lang.Keywords             24.021 MB     56.049 MB
java.lang.Strings                 89.537 MB     81.786 MB
clojure.lang.Keyword class        72.447 MB     72.451 MB

Total memory use is unchanged, but note that clojure.lang.Symbol retains less, since its strings are now shared by the keyword table, while clojure.lang.Keyword retains more. Strings and the keyword table itself are essentially unchanged.

Kyle Kingsbury added a comment -

I can't figure out how to edit the ticket description, but I updated the same gist with the cumulative changes. Comments welcome!

Alex Miller added a comment -

Excellent, thanks for the data. I added a group to your auth so I think you should be able to edit descriptions now. If not, let me know. I'll re-review the patch next week. It would be good either at this point or after that to turn this into an actual patch file instead of a gist.

Kyle Kingsbury added a comment -

I've attached a cumulative patch. It comprises 8 commits, one for each stage we've discussed. I can rebase into a single commit if you'd like.

Alex Miller added a comment -

I would like a single cumulative rebased patch. I hope to have some time to look at it today.

Alex Miller added a comment -

On another look, I think it would be useful to separate this ticket into two parts - the first is about optimizing keyword creation and lookup to avoid unnecessary work (avoiding symbol creation and interning, using Strings as keys in the cache). The second part is really about optimizing cache clearing. Do you think these can be separated into two tickets?

Regarding the cache clearing part, have you tested a simpler strategy of just counting calls to clearCache() and running the clean scan every N calls? If that was almost as good, I'd be in favor of that over what is in the patch.
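
A sketch of that counting strategy, with a hypothetical counter name and an untuned N:

static final AtomicInteger clearCalls = new AtomicInteger();
static final int N = 1024;   // illustrative; would need benchmarking

public static void clearCache() {
    // Amortize the O(n) sweep over N calls instead of triggering it on
    // every dead reference.
    if (clearCalls.incrementAndGet() % N == 0)
        Util.clearCache(rq, table);
}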

The kw-static-paths version did not seem to be an improvement - perhaps you should try pulling them back together to simplify the code? Only worth it if there is a real improvement from it.

On the various find methods - if you could retain their ordering and minimize noise in the diffs that would really help make it easier to screen.

Finally, we need to do some tests to verify that we have not altered the performance of using keywords and symbols as keys in a map for lookup.

Kyle Kingsbury added a comment - edited

> On another look, I think it would be useful to separate this ticket into two parts - the first is about optimizing keyword creation and lookup to avoid unnecessary work (avoiding symbol creation and interning, using Strings as keys in the cache). The second part is really about optimizing cache clearing. Do you think these can be separated into two tickets?

Created dev.clojure.org/jira/browse/CLJ-1439 for reduced intern cost

> Regarding the cache clearing part, have you tested a simpler strategy of just counting calls to clearCache() and running the clean scan every N calls? If that was almost as good, I'd be in favor of that over what is in the patch.

I'm not confident that this work will be merged, so I'm kinda reticent to go off and spend another N hours doing benchmarks.

> The kw-static paths version did not seem to be an improvement - perhaps you should try pulling them back together to simplify the code? Only worth it if there is a real improvement from it.

It was obsoleted by a later commit; only included it in the benchmark because you asked about the perf impact.

> On the various find methods - if you could retain their ordering and minimize noise in the diffs that would really help make it easier to screen.

Done.

> Finally, we need to do some tests to verify that we have not altered the performance of using keywords and symbols as keys in a map for lookup.

This doesn't touch the lookup path; costs are equivalent.

Alex Miller added a comment -

Reduced patch with only the keyword cache-clearing changes.

Alex Miller added a comment -

Patch that spins cache cleaning into a future

Kyle Kingsbury added a comment -

Just as a followup: got bit by this issue again in a data analysis project today; JSON parsing thrashes the reference queue really hard. Merging this patch would definitely be appreciated. YourKit screenshot here: http://aphyr.com/media/clojure-keyword-refqueue.png

Kyle Kingsbury added a comment - edited

Oh yeah, once these two are merged, here's a patch that speeds up my EDN-parsing-heavy Hadoop jobs by 2-3x. It avoids constructing the symbol every time.

--- a/src/jvm/clojure/lang/EdnReader.java
+++ b/src/jvm/clojure/lang/EdnReader.java
@@ -299,10 +299,9 @@ private static Object matchSymbol(String s){
                        return null;
                        }
                boolean isKeyword = s.charAt(0) == ':';
-               Symbol sym = Symbol.intern(s.substring(isKeyword ? 1 : 0));
                if(isKeyword)
-                       return Keyword.intern(sym);
-               return sym;
+                       return Keyword.intern(s.substring(1));
+               return Symbol.intern(s);
                }
        return null;
 }
Kyle Kingsbury added a comment - edited

Quoting the clearCache from kw-clean-future.patch:
public static void clearCache() {
  if(rq.poll() != null) {
    Agent.soloExecutor.submit(new Runnable() {
      public void run() {
        Util.clearCache(rq,table);
      }
    });
  }
}

This spawns literally hundreds of new threads - 30-40 at a time in my workload - which fight over the reference queue. It also causes a fair bit of contention on the executor itself during keyword-heavy allocation (all threads have to synchronize on the executor's queue), but that seems secondary to the cost of the cache-clearing threads themselves.

How about adding a single-thread executor to Agent for these sorts of housekeeping jobs?
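
A sketch of such a housekeeping executor (hypothetical; not an actual Agent field). Daemon threads avoid the shutdown-agents requirement mentioned in the ticket description.

public static final ExecutorService cacheClearExecutor =
    Executors.newSingleThreadExecutor(new ThreadFactory() {
        public Thread newThread(Runnable r) {
            Thread t = new Thread(r, "clojure-keyword-cache-clearer");
            t.setDaemon(true);  // don't keep the JVM alive for housekeeping
            return t;
        }
    });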

Alex Miller added a comment -

I actually built another patch that did exactly that but never got around to attaching it; a single-threaded executor reserved solely for cache clearing. I tried actually making it completely independent but I found it was pretty easy for it to fall behind - it needs to be connected into the construction process to avoid blowing the queue up too big.

I have not been able to build an isolated test that actually showed any significant performance difference with just your cache-clearing change (what's currently attached as keyword-cache.diff) and not the other changes. I had many problems getting your test code to run but when I did get something to run I was not able to see any significant difference with just the keyword-cache.diff.

Kyle Kingsbury added a comment -

Managed to eliminate the refqueue contention by having only one thread involved in GCing at a time. It also doesn't require messing with background threads, and is less susceptible to the queue-overflow problem. Since the various extant patches don't apply cleanly on top of each other, I've re-written them in unified-kw-patch.diff, attached. Roughly doubles throughput compared to your patch, at least on a 24-core Xeon running OpenJDK 7.

http://aphyr.com/media/clojure-reduced-kw-refqueue-contention.png

Can you please reconsider merging? I've put over a hundred hours into writing, testing, and refining this patchset; it's been stable in production for months and has made a dramatic difference in several of our data-heavy Clojure programs.

Alex Miller added a comment -

Hey Kyle, I appreciate the time you've put into this. However, having a big giant patch tuned on a single use case is not an effective way to evolve the language. We need to separate and describe problems, then explore the solution space for each one, as independently as possible, while considering the impacts on all other use cases.

This particular ticket is concerned solely with the linear cleanup of the reference queue. Can you split out just a patch that deals with this issue? It would be helpful to have a test that demonstrates the performance problem and how this patch addresses it. My testing so far with the prior patch did not demonstrate any improvement.

It would also be helpful to have a squashed version of the complement of the changes related to interning on CLJ-1439 for consideration of that as a separate problem. (And maybe there is further splitting that could be done; I have not looked closely at the interning changes.)

Alex Miller added a comment -

The EdnReader changes, for example, should be a separate ticket.

Kyle Kingsbury added a comment -

Could you at least merge dev.clojure.org/jira/browse/CLJ-1439 first? I split it into a separate ticket over a month ago and these changes depend on it.

Alex Miller added a comment -

I would be happy to consider CLJ-1439 first. Can you update the patch there to be current and focused on the intern/cache?

Kyle Kingsbury added a comment -

The patch is current, and it is focused on the intern/cache.

