
[CLJ-1554] Need to expand tests to cover transducers Created: 07/Oct/14  Updated: 24/Oct/14

Status: Open
Project: Clojure
Component/s: None
Affects Version/s: Release 1.7
Fix Version/s: Release 1.7

Type: Defect Priority: Critical
Reporter: Alex Miller Assignee: Alex Miller
Resolution: Unresolved Votes: 0
Labels: transducers

Attachments: Text File clj-1554-2.patch     Text File clj-1554-3.patch     Text File clj-1554.patch    
Patch: Code and Test
Approval: Vetted

 Description   

Attached patch contains both generative and example tests for transducers. The generative tests build a series of sequence functions (take 5, filter odd?, etc.) and apply them to a random vector of numbers in four ways: as seq transformations, as sequence with the transducer, as into with the transducer, and as transduce with the transducer. The results are then compared.
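
The general shape of the comparison, as a minimal sketch of the idea rather than the attached patch (names here are illustrative):

;; Apply the same transformation four ways and check that the results agree:
;; as a plain seq pipeline, via sequence, via into, and via transduce.
(let [xform (comp (filter odd?) (map inc) (take 5))
      coll  (vec (range 100))
      as-seq       (->> coll (filter odd?) (map inc) (take 5))
      as-sequence  (sequence xform coll)
      as-into      (into [] xform coll)
      as-transduce (transduce xform conj [] coll)]
  (= as-seq as-sequence as-into as-transduce))
;;=> true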

Note: these tests depend on the patch in CLJ-1349 to run as tests.

Patch: clj-1554-3.patch



 Comments   
Comment by Fogus [ 24/Oct/14 1:44 PM ]

I downloaded and applied this patch and its dependent patch (1349) and ran the tests. The coverage is a good start, and the approach of verifying results against results gathered from other approaches is important. One note of style: the use of `doall` is inconsistent in the `apply-as-*` functions. I would recommend that at least one other person screen this patch, as my grasp of test.check is tenuous.

Comment by Alex Miller [ 24/Oct/14 2:52 PM ]

Updated patch slightly to clean up the doall stuff.





[CLJ-1546] Widen vec to take Iterable/IReduce Created: 02/Oct/14  Updated: 24/Oct/14

Status: Open
Project: Clojure
Component/s: None
Affects Version/s: Release 1.7
Fix Version/s: Release 1.7

Type: Enhancement Priority: Critical
Reporter: Alex Miller Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None

Attachments: Text File clj-1546-2.patch     Text File clj-1546-3.patch     Text File clj-1546.patch    
Patch: Code and Test
Approval: Vetted

 Description   

These examples should work but do not:

Something Iterable but not IReduce:

user> (def i (eduction (map inc) (range 100)))
#'user/i
user> (instance? java.util.Collection i)
false
user> (instance? Iterable i)
true
user> (vec i)
RuntimeException Unable to convert: class clojure.core.Iteration to Object[]

Something IReduceInit but not Iterable:

user=> (vec
  (reify clojure.lang.IReduceInit
    (reduce [_ f start]
      (reduce f start (range 10)))))
RuntimeException Unable to convert: class user$reify__15 to Object[]

Proposal: Add PersistentVector.create(Iterable) and PersistentVector.create(IReduceInit) to efficiently create PersistentVectors from those types, and add the corresponding cases to vec so it supports them.
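
For illustration, the behavior vec should have once it handles these cases (expected results, not output from the patch):

user=> (vec (eduction (map inc) (range 5)))
[1 2 3 4 5]
user=> (vec (reify clojure.lang.IReduceInit
              (reduce [_ f start]
                (reduce f start (range 10)))))
[0 1 2 3 4 5 6 7 8 9]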

Patch: clj-1546-3.patch

Screened by:



 Comments   
Comment by Timothy Baldridge [ 02/Oct/14 9:44 AM ]

Is there a reason the final case for (vec something) can't just be a call to (into [] coll)? It seems a bit odd to call (to-array) on anything that's not a Java collection or Iterable when we have IReduce.

Comment by Rich Hickey [ 02/Oct/14 10:02 AM ]

re: Tim - yes, this needs to support IReduce (and thereby educe) as well

Comment by Alex Miller [ 14/Oct/14 9:56 AM ]

Added new patch that handles Iterable and IReduceInit in vec. It also makes calling with a vector much faster due to the first check. into is still faster for chunked seqs (due to special InternalReduce handling of chunking).
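
A rough way to spot-check the vec vs. into gap on a chunked seq (assuming criterium as the benchmark harness; timings intentionally omitted):

(require '[criterium.core :refer [quick-bench]])

(let [chunked (doall (map inc (range 1000000)))] ; realized, chunked lazy seq
  (quick-bench (vec chunked))
  (quick-bench (into [] chunked)))  ; into is reduce-based, so it hits the chunked InternalReduce path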

It would be possible to move more of the variant checking into LazilyPersistentVector or PersistentVector so it could be used in more contexts. I'm not sure how much to do with that.

It would also be possible to instead lean on reduce more from the Java side if there were a Java version of reduce (as defined in mikera's branch for http://dev.clojure.org/jira/browse/CLJ-1192 at https://github.com/mikera/clojure/compare/clj-1192-vec-performance). Something like that is the only way I can see of leveraging the same InternalReduce logic that makes into faster than vec.





[CLJ-1529] Significantly improve compile time by reducing calls to Class.forName Created: 21/Sep/14  Updated: 27/Oct/14

Status: Open
Project: Clojure
Component/s: None
Affects Version/s: Release 1.7
Fix Version/s: Release 1.7

Type: Enhancement Priority: Critical
Reporter: Zach Tellman Assignee: Unassigned
Resolution: Unresolved Votes: 28
Labels: compiler, performance

Attachments: File class-for-name.diff     File clj-1529-no-cache.diff     PNG File clj-1529.png     File clj-1529-with-cache.diff     Text File maybe-class-cache-2.patch     Text File maybe-class-cache.patch    
Patch: Code
Approval: Vetted

 Description   

Compilation speed has been a real problem for a number of my projects, especially Aleph [1], which in 1.6 takes 18 seconds to load. Recently I realized that Class.forName is being called repeatedly on symbols which are lexically bound. Hits on Class.forName are cached, but misses are allowed to go through each time, which translates into tens of thousands of calls after calling `(use 'aleph.http)`.

This patch improves compilation time from 18 seconds to 7 seconds. The gain is exaggerated by the number of macros I use, but I would expect at least 50% improvements across a wide variety of codebases.

This patch does introduce a slight semantic change by privileging lexical scope over classnames. Consider this code:

(let [String "foo"]
  (. String substring 0 1))

Previously, this would be treated as a static call to 'java.lang.String', but with the patch it is treated as a reference to the lexical variable 'String'. Since the new semantics are what I (and I think everyone else) would have expected in the first place, it's very likely that no one is shadowing classes with their variable names; otherwise someone would have complained about this already. If anyone feels this is at all risky, however, I'm happy to discuss it further.
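
For concreteness, this is how I'd expect the shadowing example to behave with the patch applied (my reading, not output from its tests): the local wins, so the form is an instance call on "foo".

user=> (let [String "foo"] (. String substring 0 1))
"f"

Without the patch, the same form resolves String to java.lang.String and compiles as a static member access, which fails because the class has no matching static method.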

[1] https://github.com/ztellman/aleph



 Comments   
Comment by Ghadi Shayban [ 21/Sep/14 4:30 PM ]

One of our larger projects (not macro-laden) just went from 36 seconds to 23 seconds of startup time with this patch.

Comment by Ramsey Nasser [ 03/Oct/14 12:34 PM ]

I ported this patch to Clojure-CLR for the Unity integration project and we have seen significant speedups as well. I too agree that this is the behavior I expect as a user.

Comment by Alex Miller [ 06/Oct/14 12:19 PM ]

I ran this on a variety of open-source projects. I didn't find that it produced any unexpected behavior or test errors. Most projects were about 10% faster to run the equivalent of "lein test", with a few as high as 35% faster.

Comment by Alex Miller [ 07/Oct/14 12:52 PM ]

We're interested in comparing this and the class caching in the fastload branch to get something in for 1.7. The next step is to extract a patch of the fastload changes so we can compare them better.

Comment by Alex Miller [ 07/Oct/14 4:06 PM ]

Added the maybe-class-cache patch from the fastload branch.

Comment by Alex Miller [ 08/Oct/14 8:57 AM ]

Times below (in seconds) to run "time lein test" on a variety of projects, with columns:

  • master = current 1.7.0 master
  • maybe-cache = maybe-class-cache.patch extracted from Rich's fastload branch
  • class-for-name = class-for-name.diff from Zach
  • % maybe-cache = % improvement for maybe-cache over master
  • % class-for-name = % improvement for class-for-name patch over master (sorted desc)

project         master    maybe-cache   class-for-name   % maybe-cache   % class-for-name
aleph           25.605    16.572        14.460           35.278          43.527
riemann         40.550    27.656        24.734           31.798          39.004
lamina          37.247    30.072        29.045           19.263          22.021
trapperkeeper   11.979    11.158        10.3             6.854           14.016
plumbing        73.777    68.388        66.922           7.304           9.292
cheshire        5.583     5.089         5.086            8.848           8.902
tools.analyzer  5.411     5.289         5.023            2.255           7.171
core.async      19.161    18.090        17.942           5.589           6.362
tools.reader    4.686     4.435         4.401            5.356           6.082
clara-rules     43.964    42.140        41.542           4.149           5.509
core.typed      158.885   154.954       151.445          2.474           4.683
instaparse      9.286     8.922         8.859            3.920           4.598
schema          45.3      43.914        43.498           3.060           3.978
mandoline       76.295    74.831        74.425           1.919           2.451

The summary is that both patches improve times on all projects. In most cases, the improvement from either is <10% but the first few projects have greater improvements. The class-for-name patch has a bigger improvement in all projects than the maybe-cache patch (but maybe-cache has no change in semantics).

Comment by Nicola Mometto [ 08/Oct/14 9:03 AM ]

Are the two patches mutually exclusive?

Comment by Alex Miller [ 08/Oct/14 9:35 AM ]

They are non-overlapping. I have not considered whether they could both be applied or whether that makes any sense.

Comment by Alex Miller [ 08/Oct/14 9:53 AM ]

Both patches essentially cut off the same hot code path, just at different points (class-for-name cuts it off earlier), so applying both should give you about the performance of class-for-name.

Comment by Alex Miller [ 08/Oct/14 2:14 PM ]

Added a picture of the data for easier consumption.

Comment by Deepak Giridharagopal [ 10/Oct/14 4:35 PM ]

One of our bigger projects saw a reduction of startup time of 16% with class-for-name, 14% with maybe-cache, and a whopping 23% with both patches applied. This was actually starting up the program, as opposed to running "lein test", FWIW.

Maybe it's worth re-running the benchmarks with a "both-patches" variant?

Comment by Alex Miller [ 10/Oct/14 5:28 PM ]

Hey Deepak, I did actually run some of them with both patches and saw times similar to class-for-name.

Were your times consistent across restarts? The times in the data above are the best of 3 trials for every data point (although they were relatively consistent).

Comment by Deepak Giridharagopal [ 10/Oct/14 6:08 PM ]

Hi Alex, the tests I ran did 20-iteration loops, and I took the mean (though it was pretty consistent between restarts). I can redo stuff and upload the raw data for you if that will help.

Comment by Deepak Giridharagopal [ 10/Oct/14 6:43 PM ]

So repeating the experiment several times does in fact show the behavior you suspected...apologies for my previous LOLDATA.

Comment by Alex Miller [ 24/Oct/14 3:01 PM ]

maybe-class-cache-2.patch removes some debugging stuff

Comment by Alex Miller [ 27/Oct/14 4:41 PM ]

I've done more testing and made mods to both patches and moved them closer together.

On the maybe-class-cache patch (new version = clj-1529-with-cache.diff):
1) I found that adding a final else branch that returned null was an improvement - this avoids caching things that will never hit in the future (Cons, PersistentList, Symbols with namespaces, etc). That's both a perf improvement (avoids hashing those things) and a memory improvement.
2) The tagToClass improvement from Zach's patch is orthogonal and also valid here so I added it.
3) I added Zach's check, but moved the placement lower so that it doesn't alter semantics. It helps avoid caching locals that aren't classes.

On the class-for-name patch (new version = clj-1529-no-cache.diff):
1) Same change as #3 above - moved check lower to avoid semantics change.

With these changes, both patches have tagToClass and local checks, neither patch changes semantics, and the only difference is whether to keep or remove the cache.

aleph timings (for "lein test"):

  • 1.7 master = 25.415 s
  • 1.7 + clj-1529-with-cache.diff = 14.329 s
  • 1.7 + clj-1529-no-cache.diff = 14.808 s

lamina timings (for "lein test"):

  • 1.7 master = 37.340 s
  • 1.7 + clj-1529-with-cache.diff = 28.680 s
  • 1.7 + clj-1529-no-cache.diff = 28.759 s

The cache helps slightly in both cases, but it does not seem worth adding the new dynamic var and unbounded memory use inherent in the cache.




