tools.nrepl

Flush *out*/*err* after N characters

Details

  • Type: Enhancement
  • Status: Closed
  • Priority: Minor
  • Resolution: Completed
  • Affects Version/s: None
  • Fix Version/s: None
  • Component/s: None
  • Labels:
    None
  • Patch:
    Code and Test

Description

Not sure how this should be configured, but it would be nice if doing something like (println large-seq-here) would flush output automatically after some number of characters. This is so that output could stream constantly across the nrepl connection instead of waiting for the entire output to send as one message (assuming no manual flushes).
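As a rough illustration of the idea (not the actual tools.nrepl patch), a `Writer` can count the characters written to it and flush its delegate once a threshold is crossed, so output streams in chunks instead of arriving as one message. The class name `ThresholdFlushWriter` and the demo are hypothetical:

```java
import java.io.IOException;
import java.io.StringWriter;
import java.io.Writer;

public class ThresholdFlushDemo {
    /** Hypothetical sketch: flushes the delegate once N characters accumulate. */
    static class ThresholdFlushWriter extends Writer {
        private final Writer delegate;
        private final int threshold;
        private int pending = 0;

        ThresholdFlushWriter(Writer delegate, int threshold) {
            this.delegate = delegate;
            this.threshold = threshold;
        }

        @Override
        public void write(char[] cbuf, int off, int len) throws IOException {
            delegate.write(cbuf, off, len);
            pending += len;
            if (pending >= threshold) {
                flush();      // push partial output across the connection now
                pending = 0;
            }
        }

        @Override public void flush() throws IOException { delegate.flush(); }
        @Override public void close() throws IOException { delegate.close(); }
    }

    public static void main(String[] args) throws IOException {
        final int[] flushes = {0};
        // Count flush calls so the threshold behavior is observable.
        StringWriter sink = new StringWriter() {
            @Override public void flush() { flushes[0]++; }
        };
        Writer w = new ThresholdFlushWriter(sink, 4);
        for (char c : "abcdefghij".toCharArray()) {
            w.write(c);  // one char at a time, like incremental printing
        }
        System.out.println(sink.toString());  // abcdefghij
        System.out.println(flushes[0]);       // 2 (after the 4th and 8th chars)
    }
}
```

With a threshold of 4, ten single-character writes trigger two flushes; the trailing two characters stay buffered until an explicit flush or close, which is the same trade-off any size-based flushing scheme makes.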

Activity

Colin Jones added a comment -

Done, thanks a bunch!

Chas Emerick added a comment -

Sorry for the delay, this fell through the cracks.

Looks good to me, except: can we make the "buffer size" an argument to the session middleware fn? That's where people will want to be able to customize it, if ever.

Commit and close this at will.

Colin Jones added a comment -

Any thoughts on this one? It would be great to be able to print big seqs incrementally.

Colin Jones added a comment -

There is precedent for this, in the sense that many output buffers have some fixed size and are flushed when that size has been reached (e.g. http://www.gnu.org/software/libc/manual/html_node/Flushing-Buffers.html, http://docs.oracle.com/javase/6/docs/api/java/io/BufferedWriter.html).

We had talked about potential worries / headaches, where users may get partial lines printed at a time. But after more thought about it, I'm thinking we may be OK: concurrent out/err writes can already interleave in interesting ways, and depending on the placement of (flush) calls (explicitly or implicitly) to make them more atomic is something I'd already have expected to be fragile. Maybe others will see things I don't, though.

The implementation I have uses 1024 as the default, but that's pretty arbitrary. A less dynamic way of setting the limit would also be fine with me, though since we're already in I/O-land, I'd assume performance isn't going to be bound by the dynamic var lookup.

