
[CLJ-1546] Widen vec to take Iterable/IReduce Created: 02/Oct/14  Updated: 14/Oct/14

Status: Open
Project: Clojure
Component/s: None
Affects Version/s: Release 1.7
Fix Version/s: Release 1.7

Type: Enhancement Priority: Critical
Reporter: Alex Miller Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None

Attachments: Text File clj-1546-2.patch     Text File clj-1546.patch    
Patch: Code
Approval: Incomplete

 Description   

These examples should work but do not:

Something Iterable but not IReduce:

user> (def i (eduction (map inc) (range 100)))
#'user/i
user> (instance? java.util.Collection i)
false
user> (instance? Iterable i)
true
user> (vec i)
RuntimeException Unable to convert: class clojure.core.Iteration to Object[]

Something IReduceInit but not Iterable:

user=> (vec
  (reify clojure.lang.IReduceInit
    (reduce [_ f start]
      (reduce f start (range 10)))))
RuntimeException Unable to convert: class user$reify__15 to Object[]

Proposal: Add PersistentVector.create(Iterable) to efficiently create a PersistentVector from an Iterable, and add a case to vec that calls it for arguments that are Iterable but not Collection.
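A minimal Clojure-side sketch of the widened dispatch (the actual patch does this inside LazilyPersistentVector; the name, ordering, and fallback here are assumptions):

(defn widened-vec [coll]
  (cond
    (vector? coll) coll
    (instance? java.util.Collection coll)
      (clojure.lang.LazilyPersistentVector/create coll)
    ;; IReduceInit: let the collection reduce itself into a transient vector
    (instance? clojure.lang.IReduceInit coll)
      (persistent! (.reduce ^clojure.lang.IReduceInit coll
                            (fn [v x] (conj! v x))
                            (transient [])))
    ;; Iterable but not Collection: walk the iterator directly
    (instance? Iterable coll)
      (loop [v (transient []), it (.iterator ^Iterable coll)]
        (if (.hasNext it)
          (recur (conj! v (.next it)) it)
          (persistent! v)))
    :else (vec (to-array coll))))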

Patch: clj-1546-2.patch

Screened by:



 Comments   
Comment by Timothy Baldridge [ 02/Oct/14 9:44 AM ]

Is there a reason the final case for (vec something) can't just be a call to (into [] coll)? It seems a bit odd to do (to-array) on anything that's not a Java collection or Iterable, when we have IReduce.

Comment by Rich Hickey [ 02/Oct/14 10:02 AM ]

re: Tim - yes, this needs to support IReduce (and thereby educe) as well

Comment by Alex Miller [ 14/Oct/14 9:56 AM ]

Added a new patch that handles Iterable and IReduceInit in vec. It also makes calling vec with a vector much faster due to the first check. into is still faster for chunked seqs (due to special InternalReduce handling of chunking).

It would be possible to move more of the variant checking into LazilyPersistentVector or PersistentVector so it could be used in more contexts. I'm not sure how much to do with that.

It would also be possible to lean on reduce more from the Java side if there were a Java version of reduce, as defined in mikera's branch for http://dev.clojure.org/jira/browse/CLJ-1192 at https://github.com/mikera/clojure/compare/clj-1192-vec-performance. Something like that is the only way I can see of leveraging the same InternalReduce logic that makes into faster than vec.





[CLJ-1424] Feature Expressions Created: 15/May/14  Updated: 13/Oct/14

Status: Open
Project: Clojure
Component/s: None
Affects Version/s: None
Fix Version/s: Release 1.7

Type: Enhancement Priority: Major
Reporter: Ghadi Shayban Assignee: Alex Miller
Resolution: Unresolved Votes: 0
Labels: reader

Attachments: File CLJ-1424-2.diff     File clojure-feature-expressions.diff    
Approval: Incomplete

 Description   

Feature expressions based directly on Common Lisp. See Clojure design docs, which includes discussion and links to Common Lisp documentation for feature expressions here: http://dev.clojure.org/display/design/Feature+Expressions

#+ and #- are supported, along with the combinators and, or, and not. Unreadable tagged literals are suppressed through the *suppress-read* dynamic var. For example, with *features* being #{:clj} (the default), the following should read :foo:

#+cljs #js {:one :two} :foo

The initial *features* set can be augmented (clj will always be included) with the clojure.features System property:

-Dclojure.features=production,embedded
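The combinator semantics can be pictured with a small stand-in (illustrative only; the real logic lives in the reader, and feature-test is a hypothetical name):

(defn feature-test [features expr]
  (cond
    (keyword? expr) (contains? features expr)
    (seq? expr) (let [[op & args] expr]
                  (case op
                    and (every? #(feature-test features %) args)
                    or  (boolean (some #(feature-test features %) args))
                    not (not (feature-test features (first args)))))))

(feature-test #{:clj} '(and :clj (not :cljs)))  ;=> true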

Patch: CLJ-1424-2.diff

Questions: Should *suppress-read* override *read-eval*?

Related: CLJS-27, TRDR-14



 Comments   
Comment by Jozef Wagner [ 16/May/14 2:19 AM ]

Has there been a decision that CL syntax is going to be used? Related discussion can be found on the design page, in the Google Groups discussion, and in another discussion.

Comment by Alex Miller [ 16/May/14 8:34 AM ]

No, no decisions on anything yet.

Comment by Ghadi Shayban [ 19/May/14 7:25 PM ]

Just to echo a comment from TRDR-14:

This is WIP and just one approach for feature expressions. There seem to be at least a couple of diverging approaches emerging from the various discussions (Brandon Bloom's idea of read-time splicing being the other).

In any case, having all Clojure platforms ready for the change is probably essential. Backwards compatibility of feature-expression code with Clojure 1.6 and below is also not trivial.

Comment by Kevin Downey [ 04/Aug/14 1:39 PM ]

If you have ever tried to do tooling for a language where the "parser" tossed out information or did some partial evaluation, it is a pain. This is basically what the #+cljs style feature expressions and bbloom's read-time splicing both do with Clojure's reader. I think resolving this at read time, instead of having the compiler do it before macro expansion, is a huge mistake and makes the reader much less useful for reading code.

Comment by Ghadi Shayban [ 04/Aug/14 2:00 PM ]

Kevin, what kind of tooling use case are you alluding to?

Comment by Kevin Downey [ 04/Aug/14 3:24 PM ]

Any use case that involves reading code and not immediately handing it off to the compiler. If I wanted to write a little snippet to read in a function, add an unused argument to every arity, then pprint it back, reader-resolved feature expressions would not round-trip.

If I want to write a snippet of code to generate all the methods for a deftype (not a macro, just a `for` expression at the repl), I can generate a Clojure data structure, call pprint on it, then paste it in as code. Reader feature expressions don't have a representation as data, so I cannot do that; I would have to generate strings directly.
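For example, the kind of data-to-code round trip being described (method-impl is a placeholder):

(require '[clojure.pprint :refer [pprint]])
;; build the method forms as ordinary data, then print them for pasting back in
(pprint (for [m '[foo bar baz]]
          (list m '[this] (list 'method-impl (keyword m)))))
;; ((foo [this] (method-impl :foo))
;;  (bar [this] (method-impl :bar))
;;  (baz [this] (method-impl :baz)))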

Comment by Alex Miller [ 22/Aug/14 9:10 AM ]

Changing Patch setting so this is not in Screenable yet (as it's still a wip).





[CLJ-1192] vec function is substantially slower than into function Created: 06/Apr/13  Updated: 13/Oct/14

Status: Open
Project: Clojure
Component/s: None
Affects Version/s: Release 1.5
Fix Version/s: Release 1.7

Type: Enhancement Priority: Major
Reporter: Luke VanderHart Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: performance

Approval: Incomplete

 Description   

(vec coll) and (into [] coll) do exactly the same thing. However, due to into using transients, it is substantially faster. On my machine:

(time (dotimes [_ 100] (vec (range 100000))))
"Elapsed time: 732.56 msecs"

(time (dotimes [_ 100] (into [] (range 100000))))
"Elapsed time: 491.411 msecs"

This is consistently repeatable.

Since vec's sole purpose is to transform collections into vectors, it should do so at the maximum speed available.



 Comments   
Comment by Andy Fingerhut [ 07/Apr/13 5:50 PM ]

I am pretty sure that Clojure 1.5.1 also uses transient vectors for (vec (range n)) (and probably some earlier versions of Clojure, too).

Look at vec in core.clj. It checks whether its arg is a java.util.Collection, which lazy seqs are, so it calls (clojure.lang.LazilyPersistentVector/create coll).

LazilyPersistentVector's create method checks whether its argument is an ISeq, which lazy seqs are, so it calls PersistentVector.create(RT.seq(coll)).

All 3 of PersistentVector's create() methods use transient vectors to build up the result.

I suspect the difference in run times is due not to transients but to the way into uses reduce, and perhaps also to the possibly unnecessary call to RT.seq in LazilyPersistentVector's create method (in this case, at least – it is likely needed for other types of arguments).
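For reference, the dispatch Andy describes looks roughly like this in core.clj (paraphrased; the fallback branch is reconstructed from memory):

(defn vec [coll]
  (if (instance? java.util.Collection coll)
    (clojure.lang.LazilyPersistentVector/create coll)
    ;; non-Collection arguments go through to-array first
    (. clojure.lang.LazilyPersistentVector (createOwning (to-array coll)))))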

Comment by Alan Malloy [ 14/Jun/13 2:17 PM ]

I'm pretty sure the difference is that into uses reduce: since reducers were added in 1.5, chunked sequences know how to reduce themselves without creating unnecessary cons cells. PersistentVector/create doesn't use reduce, so it has to allocate a cons cell for each item in the sequence.

Comment by Gary Fredericks [ 08/Sep/13 1:55 PM ]

Is there any downside to (defn vec [coll] (into [] coll)) (or the inlined equivalent)?
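Spelled out, the suggestion and its rough inlined equivalent (a sketch):

(defn vec [coll]
  (into [] coll))
;; into builds through a transient, so this is roughly
;; (persistent! (reduce conj! (transient []) coll))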

Comment by Ghadi Shayban [ 11/Apr/14 5:13 PM ]

While I agree that there are improvements and possibly low-hanging fruit, FWIW https://github.com/clojure/tools.analyzer/commit/cf7dda81a22f4c9c1fe64c699ca17e7deed61db4#commitcomment-5989545

showed a 5% slowdown from a few callsites in tools.analyzer.

This ticket's benchmark is incomplete in that it covers a single type of argument (a chunked range), and flawed in that it times the expense of realizing the range. (That could be a legit benchmark case, but it shouldn't be the only one.)

Sorry to rain on a parade. I promise I like speed too!

Comment by Greg Chapman [ 25/Apr/14 5:23 PM ]

One thing to note is that vec has a subtle difference from into when the collection is an Object array of length <= 32. In that case, vec aliases the supplied array rather than copying it (this is noted in the warning here: http://clojuredocs.org/clojure_core/clojure.core/vec). I believe I read somewhere that this behavior is intentional, but I can't find the citation.
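The aliasing can be observed at the REPL (a sketch of the documented behavior):

user=> (def arr (object-array [:a :b :c]))
#'user/arr
user=> (def v (vec arr))
#'user/v
user=> (aset arr 0 :mutated)
:mutated
user=> (first v)
:mutated  ; v shares the small array rather than copying it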

Comment by Andy Fingerhut [ 25/Apr/14 10:18 PM ]

Greg, CLJ-893 might be what you remember. That is the ticket that was closed by a patch updating the documentation of vec.

Comment by Mike Anderson [ 18/May/14 7:41 AM ]

I think there are quite a few performance improvements that can be made to vec in general. For example, if given a List it should use PersistentVector.create(List) rather than producing an unnecessary seq, which appears to be the case at the moment. Also it should probably return the same object if passed an existing IPersistentVector.

Basically there are a number of cases that we could be handling more efficiently....

I'm taking a look at this now.... will propose a quick patch if it seems there is a good solution.

Comment by Mike Anderson [ 24/Jul/14 4:01 AM ]

I've looked at this issue and it is quite complex. There are multiple types that need to potentially be converted into vectors, and doing so efficiently will often require making use of reduce-style operations on the source collections.

Doing this efficiently will probably in turn require making use of the IReduce interface, which doesn't yet seem to be fully utilised across the Clojure codebase. If we do this, lots of operations (not just vec!) can be made faster, but it will be quite a major change.

I have a branch that implements some of this but would appreciate feedback if this is the right direction before I take it any further:
https://github.com/mikera/clojure/tree/clj-1192-vec-performance

Comment by Alex Miller [ 24/Jul/14 9:45 AM ]

Thanks Mike! It may take a few days before I can get back to you about this.

Comment by Mike Anderson [ 25/Jul/14 3:44 AM ]

Basically the approach I am proposing is:

  • Make various collections implement IReduce efficiently (if they don't already). Especially applied to chunked seqs etc.
  • Have RT.reduce(...) methods that implement reduce on the Java side
  • Make the Clojure side use IReduce where relevant (should be as simple as extending the existing protocols)
  • Implement vec (and other similar operations) in terms of IReduce - which will solve this specific issue

If we really care about pushing vector performance even further, we can also consider:

  • Create specialised small vector types where appropriate - e.g. a specialised SmallPersistentVector class for <32 elements. This should outperform the more generic PersistentVector which is better suited for large vectors.
  • Some dedicated construction functions that know how to efficiently exploit knowledge about the data source (e.g. creating a vec from a segment of a big Object array can be done with a bunch of arraycopys into 32-element chunks and then constructing a PersistentVector around these)

This should give us a decent speedup overall (of course it would need benchmarking... but I'd hope to see some sort of measurable improvement on a macro benchmark like building and testing Clojure).
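The first bullet in the first list above can be pictured with a toy collection that reduces itself (illustrative names; assumes reduce dispatches on IReduceInit as in the 1.7 work):

;; a range-like collection that reduces itself without creating seqs
(deftype RangeLike [^long n]
  clojure.lang.IReduceInit
  (reduce [_ f init]
    (loop [i 0, acc init]
      (if (< i n)
        (let [acc' (f acc i)]
          (if (reduced? acc')
            @acc'
            (recur (inc i) acc')))
        acc))))

(reduce + 0 (->RangeLike 5))  ;=> 10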





[CLJ-1529] Significantly improve compile time by reducing calls to Class.forName Created: 21/Sep/14  Updated: 10/Oct/14

Status: Open
Project: Clojure
Component/s: None
Affects Version/s: Release 1.7
Fix Version/s: Release 1.7

Type: Enhancement Priority: Critical
Reporter: Zach Tellman Assignee: Unassigned
Resolution: Unresolved Votes: 27
Labels: compiler, performance

Attachments: File class-for-name.diff     PNG File clj-1529.png     Text File maybe-class-cache.patch    
Patch: Code
Approval: Incomplete

 Description   

Compilation speed has been a real problem for a number of my projects, especially Aleph [1], which in 1.6 takes 18 seconds to load. Recently I realized that Class.forName is being called repeatedly on symbols which are lexically bound. Hits on Class.forName are cached, but misses are allowed to go through each time, which translates into tens of thousands of calls after calling `(use 'aleph.http)`.

This patch improves compilation time from 18 seconds to 7 seconds. The gain is exaggerated by the number of macros I use, but I would expect at least 50% improvements across a wide variety of codebases.

This patch does introduce a slight semantic change by privileging lexical scope over classnames. Consider this code:

(let [String "foo"]
  (. String substring 0 1))

Previously, this would be treated as a static call to java.lang.String, but with the patch it is treated as a call on the lexical variable String. Since the new semantic is what I (and I think everyone else) would have expected in the first place, it's very likely that no one is shadowing classes with their variable names; someone would have complained about this by now. If anyone feels this is at all risky, however, I'm happy to discuss it further.
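Under the patch the lexical binding wins, so the expected behavior is (sketched):

user=> (let [String "foo"] (. String substring 0 1))
"f"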

[1] https://github.com/ztellman/aleph



 Comments   
Comment by Ghadi Shayban [ 21/Sep/14 4:30 PM ]

One of our larger projects (not macro-laden) just went from 36 seconds to 23 seconds to start with this patch.

Comment by Ramsey Nasser [ 03/Oct/14 12:34 PM ]

I ported this patch to Clojure-CLR for the Unity integration project and we have seen significant speedups as well. I too agree that this is the behavior I expect as a user.

Comment by Alex Miller [ 06/Oct/14 12:19 PM ]

I ran this on a variety of open-source projects. I didn't find that it produced any unexpected behavior or test errors. Most projects were about 10% faster to run the equivalent of "lein test", with a few as high as 35% faster.

Comment by Alex Miller [ 07/Oct/14 12:52 PM ]

We're interested in comparing this and the class caching in the fastload branch to get something in for 1.7. Next step is to extract a patch of the stuff in fastload so we can compare them better.

Comment by Alex Miller [ 07/Oct/14 4:06 PM ]

Added the maybe-class-cache patch from the fastload branch.
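The idea behind a miss cache, sketched in Clojure (hypothetical names; the real patch caches inside the Compiler):

(def ^:private maybe-class-cache (atom {}))

(defn cached-for-name [^String s]
  (let [v (or (@maybe-class-cache s)
              ;; cache misses as ::miss so they are not retried every time
              (let [c (try (Class/forName s)
                           (catch ClassNotFoundException _ ::miss))]
                (swap! maybe-class-cache assoc s c)
                c))]
    (when-not (= v ::miss) v)))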

Comment by Alex Miller [ 08/Oct/14 8:57 AM ]

Times below are from running "time lein test" on a variety of projects, with columns:

  • master = current 1.7.0 master
  • maybe-cache = maybe-class-cache.patch extracted from Rich's fastload branch
  • class-for-name = class-for-name.diff from Zach
  • % maybe-cache = % improvement for maybe-cache over master
  • % class-for-name = % improvement for class-for-name patch over master (sorted desc)

project         master    maybe-cache  class-for-name  % maybe-cache  % class-for-name
aleph           25.605    16.572       14.460          35.278         43.527
riemann         40.550    27.656       24.734          31.798         39.004
lamina          37.247    30.072       29.045          19.263         22.021
trapperkeeper   11.979    11.158       10.300           6.854         14.016
plumbing        73.777    68.388       66.922           7.304          9.292
cheshire         5.583     5.089        5.086           8.848          8.902
tools.analyzer   5.411     5.289        5.023           2.255          7.171
core.async      19.161    18.090       17.942           5.589          6.362
tools.reader     4.686     4.435        4.401           5.356          6.082
clara-rules     43.964    42.140       41.542           4.149          5.509
core.typed     158.885   154.954      151.445           2.474          4.683
instaparse       9.286     8.922        8.859           3.920          4.598
schema          45.300    43.914       43.498           3.060          3.978
mandoline       76.295    74.831       74.425           1.919          2.451

The summary is that both patches improve times on all projects. In most cases the improvement from either is <10%, but the first few projects show greater gains. The class-for-name patch yields a bigger improvement than the maybe-cache patch on every project (but maybe-cache involves no change in semantics).

Comment by Nicola Mometto [ 08/Oct/14 9:03 AM ]

Are the two patches mutually exclusive?

Comment by Alex Miller [ 08/Oct/14 9:35 AM ]

They are non-overlapping. I have not considered whether they could both be applied or whether that makes any sense.

Comment by Alex Miller [ 08/Oct/14 9:53 AM ]

The two patches both essentially cut off the same hot code path, just at different points (class-for-name is earlier), so applying both should effectively give you about the performance of class-for-name.

Comment by Alex Miller [ 08/Oct/14 2:14 PM ]

Added a picture of the data for easier consumption.

Comment by Deepak Giridharagopal [ 10/Oct/14 4:35 PM ]

One of our bigger projects saw a 16% reduction in startup time with class-for-name, 14% with maybe-cache, and a whopping 23% with both patches applied. This was actually starting up the program, as opposed to running "lein test", FWIW.

Maybe it's worth re-running the benchmarks with a "both-patches" variant?

Comment by Alex Miller [ 10/Oct/14 5:28 PM ]

Hey Deepak, I did actually run some of them with both patches and saw times similar to class-for-name.

Were your times consistent across restarts? The times in the data above are the best of 3 trials for every data point (although they were relatively consistent).

Comment by Deepak Giridharagopal [ 10/Oct/14 6:08 PM ]

Hi Alex, the tests I ran did 20-iteration loops, and I took the mean (though it was pretty consistent between restarts). I can redo stuff and upload the raw data for you if that will help.

Comment by Deepak Giridharagopal [ 10/Oct/14 6:43 PM ]

So repeating the experiment several times does in fact behave as you suspected...apologies for my previous LOLDATA.




