
[CLJ-1226] set! of a deftype field using field-access syntax causes ClassCastException Created: 26/Jun/13  Updated: 11/Aug/15

Status: Open
Project: Clojure
Component/s: None
Affects Version/s: Release 1.7
Fix Version/s: Release 1.8

Type: Defect Priority: Major
Reporter: Nicola Mometto Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: compiler, deftype, interop

Attachments: Text File 0001-CLJ-1226-fix-set-of-instance-field-expression-that-r.patch     Text File 0001-CLJ-1226-fix-set-of-instance-field-expression-that-r-v2.patch    
Patch: Code and Test
Approval: Screened


set! can be used to set a public field on an instance with (set! (.field inst) val). This does not work inside a protocol method implemented on a deftype with a mutable field when the target is an instance of that type itself.

user=> (defprotocol p (f [_]))
user=> (deftype t [^:unsynchronized-mutable x] p (f [this] (set! (.x this) 1)))
user=>  (f (t. 1))   ;; expect: 1
ClassCastException user.t cannot be cast to compile__stub.user.t  user.t (NO_SOURCE_FILE:1)

Cause: The bytecode emitted for the instance field access casts the target to the compile__stub class rather than the actual deftype class, so the cast fails at runtime when given a real instance.

Approach: Use getType(targetClass) instead of Type.getType(targetClass)

Patch: 0001-CLJ-1226-fix-set-of-instance-field-expression-that-r-v2.patch
Screened by: Fogus

Comment by Nicola Mometto [ 27/Jun/13 5:30 AM ]

This patch offers a better workaround for CLJ-1075, making it possible to write
(deftype foo [^:unsynchronized-mutable x] MutableX (set-x [this v] (try (set! (.x this) v)) v))

Comment by Nicola Mometto [ 25/Mar/15 4:39 PM ]

Updated patch to apply to current master

Comment by Fogus [ 07/Aug/15 3:10 PM ]

Straightforward fix and test.

Comment by Rich Hickey [ 08/Aug/15 10:05 AM ]

Screeners - please make sure the patch has an Approach section that explains how and why the patch will fix the problem. "Symptom is X, and after the patch the behavior is Y" is not good enough.

[CLJ-1224] Records do not cache hash like normal maps Created: 24/Jun/13  Updated: 31/Jul/15

Status: Open
Project: Clojure
Component/s: None
Affects Version/s: None
Fix Version/s: Release 1.8

Type: Enhancement Priority: Critical
Reporter: Brandon Bloom Assignee: Unassigned
Resolution: Unresolved Votes: 17
Labels: defrecord, performance

Attachments: Text File 0001-cache-hasheq-and-hashCode-for-records.patch     Text File 0001-cache-hasheq-and-hashCode-for-records-v2.patch     Text File 0001-CLJ-1224-cache-hasheq-and-hashCode-for-records.patch     Text File 0001-CLJ-1224-cache-hasheq-and-hashCode-for-records-v2.patch    
Patch: Code and Test
Approval: Screened


Records do not cache their hash codes like normal Clojure maps, which affects their performance. This problem has been fixed in CLJS, but still affects JVM CLJ.

Approach: Cache hash values in record definitions, similar to maps.
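A minimal sketch of the caching idea, not the actual defrecord expansion from the patch (the protocol and type names below are made up for illustration): the type carries an unsynchronized-mutable field that memoizes the hash after the first computation.

(defprotocol CachedHash
  (cached-hash [x]))

(deftype HashCachingWrapper [m ^:unsynchronized-mutable hv]
  CachedHash
  (cached-hash [this]
    (or hv                       ; nil marks "not yet computed"
        (let [h (hash m)]        ; expensive for a large map, computed once
          (set! hv h)
          h))))

(def w (HashCachingWrapper. (zipmap (range 1e3) (range 1e3)) nil))
(cached-hash w)   ;; first call computes and caches the hash
(cached-hash w)   ;; subsequent calls return the cached value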


coll          1.8.0-master   1.8.0-master+patch
small record  99 ns          9 ns
big record    455 ns         9 ns
(defrecord R [a b])  ;; small
(def r  (R. (range 1e3) (range 1e3)))
(bench (hash r))
(defrecord R [a b c d e f g h i j])  ;; big
(def r (map->R (zipmap [:a :b :c :d :e :f :g :h :i :j] (repeat (range 1e3)))))
(bench (hash r))

Patch: 0001-CLJ-1224-cache-hasheq-and-hashCode-for-records-v2.patch

Screened by: Alex Miller

Also see: http://dev.clojure.org/jira/browse/CLJS-281

Comment by Nicola Mometto [ 14/Feb/14 5:46 PM ]

I want to point out that my patch breaks ABI compatibility.
A possible approach to avoid this would be to have 3 constructors instead of 2, I can write the patch to support this if desired.

Comment by Stuart Halloway [ 27/Jun/14 11:09 AM ]

The patch 0001-CLJ-1224-cache-hasheq-and-hashCode-for-records.patch is broken in at least two ways:

  • The fields __hash and __hasheq are adopted by new records created by .assoc and .without, which will cause those records to have incorrect (and likely colliding) hash values
  • The addition of the new fields breaks the promise of defrecord, which includes an N+2 constructor taking meta and extmap. With the patch, defrecords get an N+4 constructor letting callers pick hash codes.

I found these problems via the following reasoning:

  • Code has been touched near __extmap
  • Grep for all uses of __extmap and see what breaks
Comment by Nicola Mometto [ 27/Jun/14 2:53 PM ]

Patch 0001-cache-hasheq-and-hashCode-for-records.patch fixes both those issues, reintroducing the N+2 arity constructor

Comment by Alex Miller [ 27/Jun/14 4:08 PM ]

Questions addressed, back to Vetted.

Comment by Andy Fingerhut [ 29/Aug/14 4:32 PM ]

All patches dated Jun 7 2014 and earlier no longer applied cleanly to latest master after some commits were made to Clojure on Aug 29, 2014. They did apply cleanly before that day.

I have not checked how easy or difficult it might be to update this patch.

Comment by Alex Miller [ 29/Aug/14 4:40 PM ]

Would be great to get this one updated as it's otherwise ready to screen.

Comment by Nicola Mometto [ 29/Aug/14 4:58 PM ]

Updated patch to apply to latest master

Comment by Alex Miller [ 16/Jun/15 4:06 PM ]

1) hash and hasheq are stored as Objects, which seems kind of gross. It would be much better to store them as primitive longs and check whether they'd been calculated by comparing to 0. We still end up with a long -> int conversion but at least we'd avoid boxing.

2) assoc wrongly copies over the __hash and __hasheq to the new instance:

(defrecord R [a])
(def r (->R "abc"))
(hash r)                   ;; -1544829221
(hash (assoc r :a "def"))  ;; -1544829221

3) Needs some tests if they don't already exist:

  • with-meta on a record should not affect hashcode
  • modifying a record with assoc or dissoc should affect hashcode
  • maybe something else?

4) Needs some perf tests with a handful of example records (at least: 1 field, many fields, extmap, etc).

Nicola, I'm happy to let you continue to do dev on this patch with me doing the screening if you have time. Or if you don't have time, let me know and I (or someone else) can work on the dev parts. Would like to get this prepped and clean for 1.8.

Comment by Nicola Mometto [ 16/Jun/15 5:56 PM ]

Updated patch addresses the issues raised, will add some benchmarks later

Comment by Nicola Mometto [ 16/Jun/15 5:59 PM ]

Alex, regarding point 1, I stored __hash and __hasheq as ints rather than longs and compared to -1 rather than 0, to be consistent with how it's done elsewhere in the clojure impl

Comment by Alex Miller [ 17/Jun/15 11:39 AM ]

Looking at the bytecode for hashcode and hasheq, I had two questions:

1) the -1 there is a long and requires an upcast of the private field from int to long. I'm sure that's not a big deal, but wish there was a way to avoid it. I didn't try it but maybe (int -1) would help the compiler out?

2) I wonder whether something like this would perform better:

`(hasheq [this#]
   (if (== -1 ~'__hasheq)
     (set! ~'__hasheq (int (bit-xor ~type-hash (clojure.lang.APersistentMap/mapHasheq this#)))))
   ~'__hasheq)

The common case will be a failed compare and then the field can be loaded and returned directly without any casting.

Comment by Nicola Mometto [ 17/Jun/15 11:54 AM ]

1- there's no Numbers.equiv(int, int), so even casting -1 to an int wouldn't solve this; a cast is always necessary. If we were to make __hasheq a long, we'd need an l2i in the return path; keeping it an int we need an i2l in the comparison.
2- that doesn't remove any casting, it just replaces a load from the local variable stack with a field load:

;; current version
0: ldc2_w        #203                // long -1l
3: aload_0
4: getfield      #236                // Field __hasheq:I
7: i2l
8: lcmp
9: ifne          38
12: ldc2_w        #267                // long 1340417398l
15: aload_0
16: checkcast     #16                 // class clojure/lang/IPersistentMap
19: invokestatic  #274                // Method clojure/lang/APersistentMap.mapHasheq:(Lclojure/lang/IPersistentMap;)I
22: i2l
23: lxor
24: invokestatic  #278                // Method clojure/lang/RT.intCast:(J)I
27: istore_1
28: aload_0
29: iload_1
30: putfield      #236                // Field __hasheq:I
33: iload_1
34: goto          42
37: pop
38: aload_0
39: getfield      #236                // Field __hasheq:I
42: ireturn
;; your version
0: ldc2_w        #203                // long -1l
3: aload_0
4: getfield      #236                // Field __hasheq:I
7: i2l
8: lcmp
9: ifne          35
12: aload_0
13: ldc2_w        #267                // long 1340417398l
16: aload_0
17: checkcast     #16                 // class clojure/lang/IPersistentMap
20: invokestatic  #274                // Method clojure/lang/APersistentMap.mapHasheq:(Lclojure/lang/IPersistentMap;)I
23: i2l
24: lxor
25: invokestatic  #278                // Method clojure/lang/RT.intCast:(J)I
28: putfield      #236                // Field __hasheq:I
31: goto          37
34: pop
35: aconst_null
36: pop
37: aload_0
38: getfield      #236                // Field __hasheq:I
41: ireturn
Comment by Alex Miller [ 17/Jun/15 12:01 PM ]

Fair enough! Looks pretty good to me, still needs the perf numbers.

Comment by Nicola Mometto [ 17/Jun/15 1:00 PM ]

coll          1.7.0-RC2   1.7.0-RC2+patch
big record    ~940 ns     ~10 ns
small record  ~150 ns     ~11 ns
;; big record, 1.7.0-RC2
user=> (use 'criterium.core)
user=> (defrecord R [a b c d e f g h i j])
user=> (def r (map->R (zipmap [:a :b :c :d :e :f :g :h :i :j] (repeat (range 1e3)))))
user=> (bench (hash r))
WARNING: Final GC required 1.291182176566658 % of runtime
Evaluation count : 63385020 in 60 samples of 1056417 calls.
             Execution time mean : 943.320293 ns
    Execution time std-deviation : 44.001842 ns
   Execution time lower quantile : 891.919381 ns ( 2.5%)
   Execution time upper quantile : 1.033894 µs (97.5%)
                   Overhead used : 1.980453 ns

;; big record, 1.7.0-RC2 + patch
user=> (defrecord R [a b c d e f g h i j])
user=> (def r (map->R (zipmap [:a :b :c :d :e :f :g :h :i :j] (repeat (range 1e3)))))
user=> (bench (hash r))
WARNING: Final GC required 1.0097162582088168 % of runtime
Evaluation count : 4820968380 in 60 samples of 80349473 calls.
             Execution time mean : 10.657581 ns
    Execution time std-deviation : 0.668011 ns
   Execution time lower quantile : 9.975656 ns ( 2.5%)
   Execution time upper quantile : 12.190471 ns (97.5%)
                   Overhead used : 2.235715 ns

;; small record 1.7.0-RC2
user=> (defrecord R [a b])
user=> (def r  (R. (range 1e3) (range 1e3)))
user=> (bench (hash r))
WARNING: Final GC required 1.456092401467115 % of runtime
Evaluation count : 423779160 in 60 samples of 7062986 calls.
             Execution time mean : 147.154359 ns
    Execution time std-deviation : 8.148340 ns
   Execution time lower quantile : 138.052054 ns ( 2.5%)
   Execution time upper quantile : 165.573489 ns (97.5%)
                   Overhead used : 1.629944 ns

;; small record 1.7.0-RC2+patch
user=> (defrecord R [a b])
user=> (def r  (R. (range 1e3) (range 1e3)))
user=>  (bench (hash r))
WARNING: Final GC required 1.720638384341818 % of runtime
Evaluation count : 4483195020 in 60 samples of 74719917 calls.
             Execution time mean : 11.696574 ns
    Execution time std-deviation : 0.506482 ns
   Execution time lower quantile : 10.982760 ns ( 2.5%)
   Execution time upper quantile : 12.836103 ns (97.5%)
                   Overhead used : 2.123801 ns
Comment by Alex Miller [ 17/Jun/15 3:36 PM ]

Screened for 1.8.

Comment by Alex Miller [ 22/Jun/15 8:38 AM ]

Note that using -1 for the uncomputed hash value can cause issues with transient lazily computed hash codes on serialization (CLJ-1766). In this case, the defrecord cached hash code is not transient so I don't think it's a problem, but something to be aware of. Using 0 would avoid this potential issue.

Comment by Rich Hickey [ 17/Jul/15 12:00 PM ]

So, why not use 0? You won't need initialization then either

Comment by Nicola Mometto [ 17/Jul/15 7:50 PM ]

Updated patch so that it applies on current master and changed the default hash value from -1 to 0.

Rich, we still need initialization since all the record ctors delegate to the ctor arity with explicit __hash and __hasheq, following the approach of the alt ctors for __extmap and __meta

Comment by Alex Miller [ 17/Jul/15 10:30 PM ]

Moving back to vetted for screening

Comment by Alex Miller [ 28/Jul/15 6:17 PM ]

Hey Nicola, two comments on the hasheq/hashcode impl:

1) I don't think there's any reason to use == in the check instead of =, and = seems better (I think the resulting bytecode is the same either way though).

2) The generated bytecode in these cases will call getfield twice in the cached case (once for the check, and once for the return):

public int hasheq();
       0: lconst_0      
       1: aload_0       
       2: getfield      #232                // Field __hasheq:I     ;; <-- HERE
       5: i2l           
       6: lcmp          
       7: ifne          36
      10: ldc2_w        #263                // long -989260517l
      13: aload_0       
      14: checkcast     #16                 // class clojure/lang/IPersistentMap
      17: invokestatic  #270                // Method clojure/lang/APersistentMap.mapHasheq:(Lclojure/lang/IPersistentMap;)I
      20: i2l           
      21: lxor          
      22: invokestatic  #274                // Method clojure/lang/RT.intCast:(J)I
      25: istore_1      
      26: aload_0       
      27: iload_1       
      28: putfield      #232                // Field __hasheq:I
      31: iload_1       
      32: goto          40
      35: pop           
      36: aload_0       
      37: getfield      #232                // Field __hasheq:I   ;; <-- HERE
      40: ireturn

Binding a local will avoid that:

`(hasheq [this#]
   (let [hv# ~'__hasheq]                  ;; ADDED
     (if (= 0 hv#)                        ;; USED
       (let [h# (int (bit-xor ~type-hash (clojure.lang.APersistentMap/mapHasheq this#)))]
         (set! ~'__hasheq h#)
         h#)
       hv#)))                             ;; USED

Output bytecode:

public int hasheq();
       0: aload_0       
       1: getfield      #227                // Field __hasheq:I
       4: istore_1      
       5: lconst_0      
       6: iload_1       
       7: i2l           
       8: lcmp          
       9: ifne          38
      12: ldc2_w        #258                // long -989260517l
      15: aload_0       
      16: checkcast     #16                 // class clojure/lang/IPersistentMap
      19: invokestatic  #265                // Method clojure/lang/APersistentMap.mapHasheq:(Lclojure/lang/IPersistentMap;)I
      22: i2l           
      23: lxor          
      24: invokestatic  #269                // Method clojure/lang/RT.intCast:(J)I
      27: istore_2      
      28: aload_0       
      29: iload_2       
      30: putfield      #227                // Field __hasheq:I
      33: iload_2       
      34: goto          39
      37: pop           
      38: iload_1       
      39: ireturn

For me, this was about 2% faster in bench too.

Comment by Alex Miller [ 28/Jul/15 6:18 PM ]

Equivalent change in hashCode too.

Comment by Nicola Mometto [ 29/Jul/15 4:06 AM ]

Updated patch takes into account Alex's last notes

[ASYNC-124] put! into channel that has mapcat transducer fails to dispatch more than one waiting take Created: 30/May/15  Updated: 28/Aug/15

Status: Open
Project: core.async
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Critical
Reporter: Jozef Wagner Assignee: Alex Miller
Resolution: Unresolved Votes: 1
Labels: None

Clojure 1.7.0-RC1, latest core.async head (May 2015)

Attachments: Text File async-124-2.patch     Text File async-124-3.patch     Text File async-124-4.patch     Text File async-124-5.patch     Text File async-124-6.patch     Text File async-124-7.patch     Text File async-124-8.patch     Text File async-124.patch    
Patch: Code and Test
Approval: Screened


This issue is similar to the one that caused the revert of ASYNC-103. When a channel has multiple pending takes, one put will dispatch only one take, even if the transducer attached to the channel produces multiple values.

(require '[clojure.core.async :refer (chan put! take!)])
(let [xf (mapcat #(range % (+ % 5)))
      c (chan 10 xf)]
  (dotimes [x 5]
    (take! c #(println (str x %))))
  (put! c 10)
  (put! c 15))

The output of the above code is

0 10
1 11

but it should be

0 10
1 11
2 12
3 13
4 14

Note that if we change the order and execute puts before takes, the result will be as expected.

Cause: When executing a put on a channel, at most one pending take is satisfied; the remaining pending takes wait for the next put. This was sufficient when a put could release at most one take, but with expanding transducers a single put can potentially satisfy many pending takers.

Approach: Once a put succeeds into the buffer (through the transducer), instead of finding a single taker, do the prior work in a loop, pairing up callbacks with values removed from the buffer. Once the loop is complete, release the mutex and dispatch all paired takes.
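A simplified, self-contained sketch of that pairing loop (the names here are hypothetical; the actual change lives inside ManyToManyChannel's put! and works against the channel's buffer and taker handlers, dispatching via dispatch/run):

(defn pair-takers-with-values
  "Pairs as many pending taker callbacks with buffered values as possible.
   Returns [pairs remaining-takers remaining-values]."
  [takers values]
  (loop [pairs [] takers takers values values]
    (if (and (seq takers) (seq values))
      (recur (conj pairs [(first takers) (first values)])
             (rest takers)
             (rest values))
      [pairs takers values])))

;; while still holding the channel mutex: pair up callbacks and buffered values;
;; after releasing the mutex: run every paired take (dispatch/run in the real code)
(let [[pairs _ _] (pair-takers-with-values
                    [#(println "taker 0 got" %) #(println "taker 1 got" %)]
                    [10 11 12 13 14])]
  (doseq [[cb v] pairs]
    (cb v)))
;; prints: taker 0 got 10
;;         taker 1 got 11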

Patch: async-124-8.patch

Screened by:

Comment by Jozef Wagner [ 30/May/15 10:30 AM ]

patch with test

Comment by Jozef Wagner [ 30/May/15 10:32 AM ]

Note that with the attached patch the actual dispatch/run of the takes happens before (.unlock mutex) and before (when done? (abort this)) are called.

Also note that the added test only works with Clojure 1.7.0

Comment by Jozef Wagner [ 30/May/15 12:46 PM ]

Added new patch async-124-2.patch that dispatches after unlocking and aborting. The test was also changed as it had a race condition.

Comment by Alex Miller [ 06/Jul/15 9:23 AM ]

We can't add a test that is 1.7-only. See https://github.com/clojure/core.async/blob/master/src/test/clojure/clojure/core/pipeline_test.clj#L7 for an example of how to patch in a transducer impl that will work in pre-1.7.

Comment by Jozef Wagner [ 06/Jul/15 10:41 AM ]

Added async-124-3.patch that works with Clojure <1.7

Comment by Alex Miller [ 10/Aug/15 3:53 PM ]

Jozef, I don't see any reason why the independent callbacks in your test will necessarily conj in order as they contend for the atom. When I run the test I usually get a failure. If you agree, we could wrap expected and actual in a set. Or if that seems wrong, then let's work out why.

Comment by Jozef Wagner [ 11/Aug/15 10:49 AM ]

Attached async-124-4.patch that uses unordered comparison in test

Comment by Alex Miller [ 14/Aug/15 1:01 PM ]

Added async-124-5.patch which should be semantically identical to -4 but retains the structure of a single dispatch point.

Comment by Alex Miller [ 14/Aug/15 3:03 PM ]

New -6 patch tries to retain more of the original algorithm and leverages the original loop instead of adding a new one.

Comment by Alex Miller [ 18/Aug/15 8:25 AM ]

New -7 patch addresses some bugs in the prior version and cleans up the loop code, also adds better tests.

Comment by Alex Miller [ 28/Aug/15 11:53 AM ]

test tweaks added in -8, per Stuart Sierra

Comment by Stuart Sierra [ 28/Aug/15 12:01 PM ]

Screened. Clojure version good. Still needs corresponding fix to ClojureScript version.

[ASYNC-103] promise-chan Created: 05/Nov/14  Updated: 28/Sep/15

Status: Reopened
Project: core.async
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Stuart Halloway Assignee: Alex Miller
Resolution: Unresolved Votes: 3
Labels: None

Attachments: Text File async-103-2.patch     Text File async-103-3.patch     Text File async-103-4.patch     Text File async-103-5.patch     Text File async-103-6.patch     Text File async-103-cljs.patch     Text File async-103.patch    
Patch: Code and Test
Approval: Screened
Waiting On: Alex Miller


A promise-chan is a new kind of core.async channel that gives promise semantics to a channel.

  • Takes block prior to put or close!
  • After a put, all pending takes are notified, and any subsequent takes return the put value immediately
  • After a close (and no put), all pending takes are notified with nil, and any subsequent takes return nil immediately

In practice, a promise-chan is a normal channel that uses a promise-buf:

  • buf can only take on one value (or nil if closed) ever
  • buf is never emptied: N consumers can all get the same, one and only value by reading from the channel

In order to make close! sensible with this new kind of buf (a usage sketch of the combined semantics follows this list):

  • all buffers need to be closeable
  • closing a channel closes its buffer
  • all basic buffer types do nothing when closed
  • a promise-buf makes a one-time transition to having no value when closed (if not previously delivered)
  • closing a buffer is an implementation detail not exposed to end users
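Usage sketch of the combined semantics (assuming the promise-chan function added by this patch; callback output shown in comments, ordering approximate):

(require '[clojure.core.async :refer [promise-chan put! take! close!]])

;; delivered: the pending take is notified, later takes get the same value
(let [pc (promise-chan)]
  (take! pc #(println "pending take got" %))
  (put! pc :done)
  (take! pc #(println "later take got" %)))
;; prints: pending take got :done
;;         later take got :done

;; closed without a put: pending and later takes all get nil
(let [pc (promise-chan)]
  (take! pc #(println "pending take got" %))
  (close! pc)
  (take! pc #(println "later take got" %)))
;; prints: pending take got nil
;;         later take got nil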


  • Buffer protocol now has a close-buf! function (close! would have overlapped the Channel function close!). close-buf! is invoked on a buffer when the channel is closed, allowing it to update itself if necessary.
  • Existing buffers implement close-buf! and do nothing (buffer still available for taking)
  • New promise-buffer implementation (sketched after this list). Makes a one-time transition when a value is supplied or the buffer is closed. The value is held in an unsynchronized-mutable deftype field - updates via add!* or close-buf! always happen under the containing channel mutex.
  • New API functions: promise-chan, which creates a channel with a promise-buffer, and promise-buffer.
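
A rough sketch of what such a promise-buffer could look like - an assumption for illustration, not the attached patch (it presumes the Buffer protocol in clojure.core.async.impl.protocols with full?, remove!, add!* and the new close-buf!):

(require '[clojure.core.async.impl.protocols :as impl])

(def ^:private NONE (Object.))   ; sentinel: no value delivered yet

;; constructed as (PromiseBufferSketch. NONE) by a hypothetical promise-chan
(deftype PromiseBufferSketch [^:unsynchronized-mutable v]
  impl/Buffer
  (full? [_] false)                      ; a promise buffer never rejects a put
  (remove! [_] v)                        ; taking never empties the buffer
  (add!* [this itm]
    (when (identical? NONE v)            ; one-time transition on the first put
      (set! v itm))
    this)
  (close-buf! [_]
    (when (identical? NONE v)            ; close with no put delivers nil forever
      (set! v nil)))
  clojure.lang.Counted
  (count [_] (int (if (identical? NONE v) 0 1))))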

Patch: async-103-6.patch, async-103-cljs.patch (catches up cljs to diffs in async-103-6)

Comment by Ghadi Shayban [ 07/Nov/14 4:06 PM ]

My initial gut reaction was that this is more related to semantics of the channel, not the buffer, and I'm wary of significant changes to esp. the impl protocols. But after seeing an impl, it looks straightforward and the changes aren't too significant. (Another possibility is to make another simpler implementation of a channel, with just slots for the value, lock and pending takers before the value is delivered. No slots for xfn or putters or buffer handling would be needed.)

Note an atom is not needed in the PromiseBuffer, just a set! on a mutable field to be inline with the other buffer flavors. If the patch continues using an atom, maybe guard val to not swap unnecessarily.

Comment by Alex Miller [ 07/Nov/14 11:09 PM ]

Good call on the atom - backed off to just using existing clojure.lang.Box. If that's too weird, will just deftype it.

Comment by Alex Miller [ 07/Nov/14 11:18 PM ]

Dur, just need the val field itself.

Comment by Fogus [ 09/Jan/15 4:27 PM ]

I'd really love to see the reason for the current impl over a more pointed promise channel (perhaps as described by Ghadi). This is a clear implementation, but with the addition of close-buf! some complexity is added for implementations in the future. Specifically, I'd like to at least see a docstring in the protocol describing the meaning and desired semantics around close-buf! and what it should return and when. Like I said, this is a quality patch, but I'm just concerned that there are unstated assumptions around close-buf! that escape me.

Comment by Alex Miller [ 28/Jan/15 12:43 PM ]

Fogus, I looked into implementing the promise-chan a bit today but it requires replicating a significant portion of the existing chan implementation. I believe that's the primary reason to do it this way - it leverages the majority of the existing chan code.

New -5 patch has two changes - I commented the Buffer protocol fns, and I removed promise-buffer from the api, as I think users should only use promise-chan.

Comment by Fogus [ 30/Jan/15 10:06 AM ]

The patch looks good. My only (minor) reservation is that the Buffer docstrings are written under the assumption that people will use instances only in the context of a guarded channel. I understand why of course, so I think I might be too pedantic here. Because of this I see no reason not to +1 this patch.

Comment by Alex Miller [ 23/Feb/15 8:32 AM ]

Applied patch.

Comment by Alex Miller [ 03/Apr/15 3:57 PM ]

Reopening (and likely reverting commit).

This example demonstrates that only one of the waiting blocks will be notified, instead of all, as we'd like:

(let [c (promise-chan)] 
  ;; takers
  (dotimes [_ 3] 
    (go (println (<! c)))) 

  ;; put, should get to all takers
  (go (>! c 1)))
Comment by Jozef Wagner [ 30/May/15 10:35 AM ]

ASYNC-124 fixes the issue mentioned by Alex

Comment by Leon Grapenthin [ 11/Jun/15 11:44 AM ]

Quote: "buf is never emptied: N consumers can all get the same, one and only value by reading from the channel"

This is not true with the new closing semantics introduced here, correct? If I deref a Clojure promise, I get the same value every time. But what is the purpose of closing promise-chans after the promise has been delivered?

When is a good time to close a promise-chan? If I close after the put, which is what many core.async processes do, takers must have taken before the close which means it will often be a race condition unless explicit care is taken.

Is there a benefit or am I simply missing something here?

Comment by Herwig Hochleitner [ 08/Jul/15 2:23 PM ]

Here is my cljs implementation of a promise channel: https://github.com/webnf/webnf/blob/master/cljs/src/webnf/promise.cljs
I'm the sole author and you are free to use any of it.

Comment by Alex Miller [ 12/Aug/15 3:00 PM ]

I believe that this patch can be re-evaluated after ASYNC-124 is applied as that patch addresses the reason this was retracted.

Comment by Leon Grapenthin [ 23/Aug/15 6:46 AM ]

It seems that there are two different approaches to how a promise-chan is supposed to or can be used.

1. All consumers are supposed to be takers before the promise is delivered. Taking after delivery is not intended, hence producers should close the channel after delivery.
2. " can take at any time and either have to wait for the promise (take blocks) or they get its value right away.

(2) models the behavior of real promises. (1) rather models that of a mutable reference.

It seems that the current patch explicitly makes room for (1) by giving producers the possibility to close.

Now if you want to use a promise-chan like (1), as a producer you can't create and return a promise-chan. Instead you need to be passed a promise-chan which already has (a) taker(s). On the consumer side, it seems like an annoying quirk to enqueue takes before being able to pass the promise-chan to the producer (since the producer could close the promise immediately). I can't think of a scenario where one would want to do such a thing. So my new question is:

Given the lengths taken here to enable (1), are there practical use cases for that? Why is closing a promise-chan not just going to be a no-op?

Comment by Alex Miller [ 24/Aug/15 12:56 PM ]

Taking after delivery is intended. It is hard sometimes to know whether a take will happen before or after delivery in asynchronous systems so both must work. close! will deliver nil to all pending or future takers - in this case (as with all channels) nil indicates a drained channel, meaning that no delivery will ever happen.

However, one good question here is - what happens when you close a promise-chan that has already been delivered?


  • (def pc (promise-chan))
  • take from promise-chan (blocks)
  • (>!! pc 5) - delivers 5 to blocked taker(s)
  • take from promise-chan (immediately returns 5) ;; for any number of takes
  • (close! pc)
  • take from promise-chan (immediately returns nil) ;; ???

I could easily be persuaded that the last answer is wrong and that it should always deliver 5 regardless of whether the pc gets closed (that is closing has no effect on delivered promise-chans).

Comment by Alex Miller [ 24/Aug/15 1:03 PM ]

I just confirmed with Rich that once delivered, a promise-chan should always deliver that value to takers, regardless of whether it is subsequently closed.
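
A small sketch of the confirmed behavior using the blocking API (assuming the final promise-chan semantics):

(require '[clojure.core.async :refer [promise-chan >!! <!! close!]])

(def pc (promise-chan))
(>!! pc 5)      ;; delivers 5
(<!! pc)        ;; => 5
(close! pc)     ;; closing after delivery does not change the value
(<!! pc)        ;; => 5 (still 5, not nil)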

Comment by Stuart Sierra [ 28/Aug/15 4:02 PM ]

Screened. Clojure version good, assuming ASYNC-124 is applied first. ClojureScript version may or may not need additional work.

I agree that the addition of close-buf! and the no-op implementations feel like working around some missing abstraction. (What does it even mean to "close" a buffer?) But the Buffer protocol is not part of the public API, it's buried down in clojure.core.async.impl.protocols. The Channel and Buffer implementations are already tightly coupled, so adding another point of collaboration does not seem likely to cause problems. Adding close-buf! is certainly preferable to duplicating the implementation of ManyToManyChannel.
