[ASYNC-39] Processes spawned by mix never terminate Created: 21/Nov/13  Updated: 22/Apr/14

Status: Open
Project: core.async
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Minor
Reporter: Leon Grapenthin Assignee: Rich Hickey
Resolution: Unresolved Votes: 0
Labels: chan, mix
Environment:

"0.1.256.0-1bf8cf-alpha"



 Description   

Once a mix has been created, the go-loop inside mix recurs forever. Input channels can be unmixed and the output channel can be closed, but the process itself still never terminates.

Mixes should probably support something like (stop) to make the mix-associated process garbage-collectable. Operations on a stopped mix should probably throw.
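
A minimal sketch of the report (assuming core.async aliased as a, on 0.1.256):

(require '[clojure.core.async :as a])

(let [out (a/chan)
      m   (a/mix out)
      in  (a/chan)]
  (a/admix m in)
  (a/unmix-all m)  ;; no input channels remain
  (a/close! out))  ;; output closed, yet the internal go-loop keeps running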



 Comments   
Comment by Ghadi Shayban [ 22/Apr/14 10:44 AM ]

On 0.1.278 the mix process terminates when its output channel closes [1].

[1] https://github.com/clojure/core.async/blob/master/src/main/clojure/clojure/core/async.clj#L778





[ASYNC-55] Notification of items dropped from sliding/dropping buffers Created: 12/Feb/14  Updated: 22/Apr/14

Status: Open
Project: core.async
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Chris Perkins Assignee: Rich Hickey
Resolution: Unresolved Votes: 0
Labels: None

Attachments: File drop-notification.diff    
Patch: Code and Test

 Description   

I would like to know when items are dropped from sliding or dropping buffers, so that I can do things like logging and keeping metrics.

The attached patch has one possible mechanism for doing this, by adding an optional second argument to sliding-buffer and dropping-buffer - a one-argument function that will be called with dropped items.
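
Hypothetical usage under the attached patch (the two-argument arity is the patch's proposal, not current core.async API):

(require '[clojure.core.async :as a])

;; proposed arity from the patch: the second argument is called with
;; each item the buffer drops
(def ch (a/chan (a/sliding-buffer 1024
                                  (fn [dropped]
                                    (println "dropped:" dropped)))))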



 Comments   
Comment by Ghadi Shayban [ 22/Apr/14 10:41 AM ]

Hi Chris, I think this use-case can be handled by combining a go process and a collection directly, without modification to the existing buffer impls. I'd be concerned about making buffers pay for a non-essential field.

I'd like to keep the library primitives minimal but complete.

(If you really really want to make your own buffer, no one will stop you =) It's still an impl protocol, but it's relatively stable. Just beware that buffer operations run within the context of a channel lock, so make sure to dispatch the overflow function to the async threadpool using dispatch/run).
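
A hedged sketch of the go-process approach suggested above: wrap the source in a loop that implements sliding semantics by hand and calls a notification function on eviction. sliding-pipe and on-drop are illustrative names, not core.async API:

(require '[clojure.core.async :as a])

(defn sliding-pipe
  "Like piping `in` through (chan (sliding-buffer n)), except `on-drop`
  is called with each evicted item."
  [in n on-drop]
  (let [out (a/chan)]
    (a/go-loop [q clojure.lang.PersistentQueue/EMPTY]
      (if (empty? q)
        ;; nothing buffered: just wait for input
        (let [v (a/<! in)]
          (if (nil? v)
            (a/close! out)
            (recur (conj q v))))
        ;; something buffered: race a take from `in` against a put to `out`
        (let [[v ch] (a/alts! [in [out (peek q)]])]
          (if (= ch in)
            (cond
              (nil? v)          ;; input closed: drain the queue, then close
              (do (loop [q q]
                    (when (seq q)
                      (a/>! out (peek q))
                      (recur (pop q))))
                  (a/close! out))
              (>= (count q) n)  ;; full: evict the oldest item and notify
              (do (on-drop (peek q))
                  (recur (conj (pop q) v)))
              :else
              (recur (conj q v)))
            (when v             ;; put succeeded; false means `out` closed
              (recur (pop q)))))))
    out))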





[ASYNC-58] mult channel deadlocks when untapping a consuming channel whilst messages are being queued/blocked Created: 20/Feb/14  Updated: 22/Apr/14

Status: Open
Project: core.async
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Mathieu Gauthron Assignee: Rich Hickey
Resolution: Unresolved Votes: 0
Labels: deadlock, mult, untap
Environment:

Mac 10.7.5; java version "1.7.0_40"; [org.clojure/clojure "1.5.1"]; [org.clojure/core.async "0.1.267.0-0d7780-alpha"]; Tested with cider and emacs 24.3



 Description   

Use Case:

I have two (or more) listeners tapped onto a mult channel. I want all of them to consume, and then let one (or more) of them leave at will without blocking the other consumer(s) or the publisher. Initially they work fine, until one of them wants to stop listening. I thought the listener which drops out needs to (be a good citizen and) untap its channel from the mult, otherwise a deadlock always occurs. However, if messages are put onto the mult before the leaving listener has had a chance to untap its channel, the main thread (which is simultaneously putting more messages) deadlocks. I cannot find a way to guarantee that the channel is untapped in time to avoid this race condition.

Other comments:
Once I have reproduced the deadlock, the REPL is frozen until I interrupt it with Ctrl-C.
I have also tried closing the tapped channel before untapping it, but the result was the same.

Issue:
In the following snippet, the final (println "I'm done. You will never see this") is never reached. The publisher and the remaining consumer (consumer 1) are deadlocked, even though consumer 2 tried to leave on good terms.

Code:

(let [to-mult (chan 1)
      m       (mult to-mult)]

  ;; consumer 1
  (let [c (chan 1)]
    (tap m c)
    (go (loop []
          (when-let [v (<! c)]
            (println "1 Got! " v)
            (recur))
          (println "1 Exiting!"))))

  ;; consumer 2
  (let [c (chan 1)]
    (tap m c)
    (go (loop []
          (when-let [v (<! c)]
            (when (= v 42) ;; exit when value is not 42
              (println "2 Got! " v)
              (recur)))
          (println "2 about to leave!")
          (Thread/sleep 5000) ;; wait a bit to exacerbate the race condition
          (untap m c)         ;; before unsubscribing this reader
          (println "2 Exiting."))))

  (println "about to put a few messages that work")
  (doseq [a (range 10)]
    (>!! to-mult 42))
  (println "about to put a message that will force the exit of 2")
  (>!! to-mult 43)
  (println "about to put a few more messages before reader 2 is unsubscribed to show the deadlock")
  (doseq [a (range 10)]
    (println "putting msg" a)
    (>!! to-mult 42))
  (println "I'm done. You will never see this"))

Output:
about to put a few messages that work
2 Got! 42
1 Got! 42
2 Got! 42
1 Got! 42
1 Got! 42
2 Got! 42
1 Got! 42
1 Got! 42
2 Got! 42
2 Got! 42
2 Got! 42
2 Got! 1 Got! 42
422 Got! 42

1 Got! 42
1 Got! 42
2 Got! 42
1 Got! 42
about to put a message that will force the exit of 2
1 Got! 42
2 Got! about to put a few more messages before reader 2 is unsubscribed to show the deadlock
42
putting msg 1 Got! 0
2 about to leave!
43
1 Got! 42
putting msg 1
putting msg 2
putting msg 3
1 Got! 42
2 Exiting.

Question: Is this expected? Is there a workaround?



 Comments   
Comment by Ghadi Shayban [ 22/Apr/14 10:18 AM ]

Mathieu, this is probably expected. It's important to note that to guarantee correct ordering/flow when using a mult, you should enforce it on the source/producer side of the mult, and not asynchronously on the tap side.

Mult will deref a stable set of taps just before distributing a value to them, and does not adjust dynamically during value distribution, except when a tap has been closed [1]. If you would like to stably untap without closing the tap, you can/should let the producer do it, in an ordered fashion, in between values on the input channel.

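A hedged sketch (not from this ticket) of that producer-side approach: the producer checks a control channel for untap requests in between puts, so taps are only removed at stable points. untap-reqs is an illustrative name, not core.async API:

(require '[clojure.core.async :as a])

(let [src        (a/chan 1)
      m          (a/mult src)
      untap-reqs (a/chan)] ;; consumers send their tap channel here to leave
  (a/go-loop [vs (seq (range 10))]
    ;; honor any pending untap request before the next value goes out
    (let [[req ch] (a/alts! [untap-reqs] :default nil)]
      (when (= ch untap-reqs)
        (a/untap m req)))
    (if vs
      (do (a/>! src (first vs))
          (recur (next vs)))
      (a/close! src))))

A leaving consumer sends its tap channel to untap-reqs instead of calling untap from its own thread.
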
Knowing that a put occurred to a closed channel is new on release 0.1.278.

In general, walking away on the consuming side of a channel is tricky. Depending on the semantics of your processes, if the producer side of a channel isn't aware that a close! can happen from the consumer side, you might have to launch a draining operation.

(defn drain [c] (go-loop [] (when (some? (<! c)) (recur))))

Golang disallows closing a receive-only channel, FWIW [2].

Better documentation is probably warranted.

[1] https://github.com/clojure/core.async/blob/master/src/main/clojure/clojure/core/async.clj#L680-L682
[2] http://golang.org/ref/spec#Close





[ASYNC-57] reify in go macro compile error Created: 20/Feb/14  Updated: 22/Apr/14

Status: Open
Project: core.async
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Minor
Reporter: Lijin Liu Assignee: Rich Hickey
Resolution: Unresolved Votes: 0
Labels: None
Environment:

[org.clojure/clojure "1.5.1"]
[org.clojure/core.async "0.1.267.0-0d7780-alpha"]



 Description   

(go
  (reify java.util.Collection
    (add [this o]
      (println o)
      (println this)
      true)))

clojure.lang.Compiler$CompilerException: java.lang.RuntimeException: Unable to resolve symbol: this in this context



 Comments   
Comment by Ghadi Shayban [ 22/Apr/14 9:55 AM ]

Reassigning to minor. The go macro will obviously not rewrite parking ops inside the reify. Lifting the reify into a var is a decent workaround.
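
A hedged sketch of that workaround, lifting the reify into a var defined outside the go block (collection-impl is an illustrative name):

(def collection-impl
  (reify java.util.Collection
    (add [this o]   ;; `this` resolves fine outside the go transformation
      (println o)
      (println this)
      true)))

(go (.add collection-impl :x))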





[ASYNC-35] Using onto-chan with never-ending sequence causes non-gc-able, infinitely-looping go block Created: 12/Nov/13  Updated: 20/Apr/14  Resolved: 20/Apr/14

Status: Resolved
Project: core.async
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Brian Lubeski Assignee: Rich Hickey
Resolution: Completed Votes: 0
Labels: None
Environment:

org.clojure/core.async 0.1.256.0-1bf8cf-alpha



 Description   
(close! (to-chan (range)))

The above code causes my CPU to run at around 95% until I kill the process. (NOTE: This eventually leads to an OutOfMemoryError – see ASYNC-32).

Here is what I think is happening: after closing the channel returned by to-chan, all subsequent puts to that channel by the to-chan go block succeed immediately without blocking. Because the to-chan go block never blocks on its drain channel, it runs continuously and can't be GC'd (if I understand things correctly).
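
The fix that later landed in 0.1.278 relies on puts reporting failure once the target channel is closed. A minimal sketch of that terminating copy loop (a sketch of the pattern, not the library source; assuming core.async aliased as a):

(require '[clojure.core.async :as a])

(defn onto-chan-sketch [ch coll]
  (a/go-loop [vs (seq coll)]
    (if (and vs (a/>! ch (first vs)))  ;; >! returns false once ch is closed
      (recur (next vs))
      (a/close! ch))))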



 Comments   
Comment by Leon Grapenthin [ 15/Nov/13 10:16 AM ]

I'd expect this behavior. to-chan can neither know that (range) returns an infinite sequence, nor that its output channel has been closed.

Comment by Brian Lubeski [ 11/Feb/14 11:27 AM ]

Resolved in 0.1.278.0-76b25b-alpha.





[ASYNC-52] Go block run by multiple threads at the same time for a single chan instance Created: 29/Jan/14  Updated: 20/Apr/14  Resolved: 20/Apr/14

Status: Closed
Project: core.async
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Gerrit Jansen van Vuuren Assignee: Rich Hickey
Resolution: Not Reproducible Votes: 0
Labels: None


 Description   

I'm using channels instead of agents to provide serial access to resources.
The logic is (ch is a buffered channel):

go loop:
  f = read ch
  (f)

f is a function that, in my test case, writes to an output stream; sometimes f will close the output stream and create a new one. The output stream is held in a shared atom. If the go block takes one value after another, everything should run fine. However, I get an exception because the output stream is closed. After several runs, it 'seems' to me that the go block is run by different threads at the same time.

If I change the go block to a thread the error goes away.
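
A hedged Clojure rendering of that pattern (assuming core.async aliased as a):

(require '[clojure.core.async :as a])

(def ch (a/chan 100)) ;; created once and shared by all writers

(a/go-loop []
  (when-let [f (a/<! ch)]
    (f) ;; thunks execute serially, one at a time
    (recur)))

As the resolution below notes, this only serializes access if the channel is created once, not on every write.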

To reproduce the error:
clone https://github.com/gerritjvv/fun-utils

and run in leiningen

(use 'fun-utils.chtest)
(test-star)

The file is https://github.com/gerritjvv/fun-utils/blob/master/src/fun_utils/chtest.clj

The go block is on line 24.



 Comments   
Comment by Gerrit Jansen van Vuuren [ 29/Jan/14 5:49 PM ]

Please ignore and close this.
I've found the cause: I was creating a new channel on every write.




