
[JDBC-37] Provide support for alternate transaction strategies Created: 31/Jul/12  Updated: 15/Sep/13

Status: Open
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Jim Crossley Assignee: Sean Corfield
Resolution: Unresolved Votes: 0
Labels: None

Attachments: File jdbc-37.diff    


The current design of java.jdbc prevents its use as a participant in a distributed XA transaction, where transactional control is delegated to a TransactionManager. It only works with local transactions and absorbs all nested transactions into the outermost one. It'd be nice to have a clean way to override this default behavior.
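The "absorbed into the outermost" behaviour described above can be modeled with a simple nesting-depth counter: only the outermost call begins and commits a real transaction, and inner calls are no-ops. A minimal sketch of that idea in Java terms (the class and method names here are illustrative, not java.jdbc's API):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: nested "transactions" absorbed into the outermost one via a depth counter.
public class TxAbsorb {
    private int depth = 0;
    public final List<String> log = new ArrayList<>();

    public void inTransaction(Runnable body) {
        if (depth == 0) log.add("BEGIN");      // only the outermost call starts a real transaction
        depth++;
        try {
            body.run();
        } finally {
            depth--;
            if (depth == 0) log.add("COMMIT"); // only the outermost call commits
        }
    }
}
```

An XA-aware strategy would need a way to replace the BEGIN/COMMIT steps entirely, which is exactly what this ticket asks for.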

Comment by Jim Crossley [ 31/Jul/12 5:49 PM ]

I'll try to work up a straw-man solution and submit a pull-request.

Comment by Sean Corfield [ 31/Jul/12 5:59 PM ]

Thanx Jim. I agree the current setup isn't ideal in that area. As for "pull-request", I assume you mean a patch attached to this JIRA ticket (since Clojure and contrib projects cannot accept pull requests). Please also make sure you get a Contributor's Agreement on file per http://clojure.org/contributing

Comment by Jim Crossley [ 31/Jul/12 6:25 PM ]

Sure thing, Sean

Comment by Jim Crossley [ 02/Aug/12 6:31 PM ]

Sean, here's a first whack introducing a new dynamic var for transaction strategy.

I know the desire is for a new API with explicit parameter passing, and when that vision congeals, I'm happy to help migrate, but I'd like to always have the option of the dynamic binding as well.

My thinking is that if a tx strategy function is passed as a parameter, it'll override whatever may be set in the dynamic var, but how it gets passed is still unclear to me. I considered adding an optional key to the db-spec, but wanted to run that by you first.

The Agreement is in the mail. I appreciate your feedback.

Comment by Sean Corfield [ 07/Apr/13 3:46 AM ]

There have been a lot of code changes lately and this patch no longer applies cleanly. Can you submit a new patch against the latest master? Thanx!

Comment by Jim Crossley [ 30/Apr/13 5:43 PM ]

Sean, I'm not sure I'm totally smitten with the new "transaction?" boolean parameter. At first glance, this seems an awkward way to define a transaction consisting of multiple statements. Can you provide an example usage with the new API of, say, inserting, updating, and deleting data within a single transaction? I'm hoping an example will clear up my confusion and I can propose a way of parameterizing a particular strategy for executing any transaction.

Comment by Sean Corfield [ 30/Apr/13 5:47 PM ]

See "Using Transactions" here https://github.com/clojure/java.jdbc/blob/master/doc/clojure/java/jdbc/UsingSQL.md

Comment by Jim Crossley [ 30/Apr/13 6:30 PM ]

Yes, I saw that, and it seemed to confirm that my original patch should work with minor tweaking. And then I was surprised to see the "transactional?" option in the source. I was curious how you expect it to be used.

Comment by Sean Corfield [ 30/Apr/13 6:58 PM ]

Folks have asked for the ability to run various functions without an implicit transaction inside them - in fact for some DBs, certain commands cannot be run inside a transaction which was a problem with the old API where you couldn't turn that off. It allows users to have more explicit control over transactions and it's also a convenient "implementation artifact" for nesting calls.

So, bottom line: I expect very few users to actually use it explicitly, unless they specifically need to turn off the implicit transaction wrapping.

And for most of the API that users will interact with, they don't even need to worry about it.

Does that help?

Comment by Sean Corfield [ 30/Apr/13 7:04 PM ]

Addressing your question about your patch: Clojure/core specifically wanted java.jdbc to move away from dynamically bound variables, which the new API / implementation achieves (given that all the old API that depends on dynamic-vars is deprecated now and will be completely removed before 1.0.0).

If all you need is the ability to specify how the transaction function does its job, via a HOF, then I'll have a look at what that would take in the context of the new 'world'...

Comment by Jim Crossley [ 30/Apr/13 7:33 PM ]

Regarding dynamically bound variables, I think it's very common and accepted – even canonical? – to use them (or some ThreadLocal-like variant) to implement transactions. I would hate to make the api awkward just to avoid them.

But to answer the core question, yes, I think it's important to provide an alternative to the assumptions encoded into db-transaction*, e.g. "Any nested transactions are absorbed into the outermost transaction." I might prefer a strategy in which a nested transaction suspends the current one and creates another, assuming the driver supports it.

But my primary reason for this, as you know, is to somehow inject a "null strategy" to support distributed transactions, delegating the commit/rollback choice to an external "transaction manager".

One question: what do you mean by "implementation artifact" for nesting calls?

Comment by Jim Crossley [ 01/May/13 9:46 AM ]

Sean, how do you feel about turning the :transactional? option into a function instead of a boolean? That function would represent the :tx-strategy used; it would default to the value of a dynamically bound *tx-strategy*, whose value in turn would default to db-transaction*. And folks could set it to nil to turn off transactions, i.e. :tx-strategy nil (or perhaps :tx-strategy :none) would equate to :transactional? false. I think that may satisfy core's recommendation that dynamic variables not be the only way to alter behavior. Make sense at all?
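The ":tx-strategy as a function" idea above can be sketched as a higher-order strategy: the strategy decides how to wrap a unit of work, with a default that behaves like db-transaction* and a "null" strategy for the XA case. A hedged sketch in Java terms (TxStrategy and the constant names are illustrative, not java.jdbc's API; a StringBuilder log stands in for real connection calls):

```java
import java.util.function.Supplier;

// Sketch: transaction strategy as a function that wraps a unit of work.
public class TxStrategyDemo {
    public interface TxStrategy {
        <T> T run(StringBuilder log, Supplier<T> body);
    }

    // Default strategy: behave like db-transaction* (begin/commit around the body).
    public static final TxStrategy DEFAULT = new TxStrategy() {
        public <T> T run(StringBuilder log, Supplier<T> body) {
            log.append("BEGIN;");
            T result = body.get();
            log.append("COMMIT;");
            return result;
        }
    };

    // "Null" strategy: run the body with no transaction control at all,
    // deferring commit/rollback to an external transaction manager (the XA case).
    public static final TxStrategy NONE = new TxStrategy() {
        public <T> T run(StringBuilder log, Supplier<T> body) {
            return body.get();
        }
    };
}
```

Passing the strategy explicitly (as an option or parameter) rather than only via a dynamic var is what would satisfy the "explicit interface" guideline quoted later in this ticket.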

Comment by Sean Corfield [ 01/May/13 10:57 AM ]

I was looking at the code again last night and came to much the same conclusion! I'll take a run at that this weekend (but I'm not adding a dynamic variable - Clojure/core were very clear about their reasons for not wanting those in code except in extremely rare situations in code that is guaranteed to be single-threaded).

Comment by Jim Crossley [ 01/May/13 11:22 AM ]

You're killing me!

Without the dynamic var, I can't see any way to transparently allow the db code to participate in a distributed transaction. Can we at least agree that transactional code is guaranteed to be effectively single-threaded? And by this I mean that a transaction must be associated with a single connection, so any thread using that connection must have exclusive access. Do you really want to force folks using distributed transactions to pass the tx strategy in with every call? I don't think adding the dynamic var and the option to override it violates the spirit of this guideline: "If you present an interface that implicitly passes a parameter via dynamic binding (e.g. db in sql), also provide an identical interface but with the parameter passed explicitly."

What was the specific feedback you received that contradicts that?

Comment by Jim Crossley [ 03/May/13 6:44 AM ]

Sean, I came up with a different solution for using java.jdbc with an XA connection that obviates this issue. So even though I think it's useful to provide both a dynamic var and a function option as a means to override the logic of db-transaction*, I no longer have a need for it.

Keep up the good work on java.jdbc!

Comment by Sean Corfield [ 03/May/13 12:40 PM ]

I like problems that go away of their own accord but I still like the idea of making the transaction strategy a function so I'll look at that anyway as a possible (breaking) change for alpha2.

Comment by Jim Crossley [ 03/May/13 12:50 PM ]

Something else you might consider: define a protocol function that encapsulates your commit/rollback/setAutoCommit logic inside db-transaction* and extend it to java.sql.Connection. That way, folks could extend their more specific types, e.g. XAConnection, to your protocol (and avoid making those calls that aren't allowed by XA).
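The protocol idea above can be sketched as an interface that owns the commit/rollback/setAutoCommit logic, with one implementation for plain local connections and a deliberate no-op implementation for XA connections (where those calls belong to the external TransactionManager). A minimal sketch in Java terms; the names are illustrative and a StringBuilder log stands in for real java.sql.Connection calls:

```java
// Sketch: a "protocol" for transaction control, extended per connection type.
public class TxOpsDemo {
    public interface TxOps {
        void begin(StringBuilder log);
        void commit(StringBuilder log);
    }

    // Local-connection behaviour: manage autocommit and commit explicitly.
    public static class LocalTxOps implements TxOps {
        public void begin(StringBuilder log)  { log.append("setAutoCommit(false);"); }
        public void commit(StringBuilder log) { log.append("commit;"); }
    }

    // XA behaviour: commit/rollback belong to the external TransactionManager,
    // so these are deliberate no-ops (XA forbids calling them on the connection).
    public static class XaTxOps implements TxOps {
        public void begin(StringBuilder log)  { }
        public void commit(StringBuilder log) { }
    }
}
```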

Comment by Sean Corfield [ 15/Sep/13 9:41 PM ]

Having spent some time looking at the transaction-as-function option, I don't think that's a great idea, partly because I'm not sure what alternative functions would look like. Jim's suggestion of a protocol for the internal transaction logic seems like a good one, but at this point I'm not familiar enough with alternative strategies to know exactly how the protocol should look (and which parts of the internal db-transaction* logic should be implemented that way), so I'm going to punt on this for 0.3.0 but leave it open for the future.

[JDBC-48] Support stored procedures with CallableStatement Created: 15/Mar/13  Updated: 15/Sep/13

Status: Open
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Jeremy Heiler Assignee: Sean Corfield
Resolution: Unresolved Votes: 1
Labels: None


JDBC's CallableStatement provides support for calling stored procedures. More specifically, it allows you to register OUT parameters, which will become the statement's (possibly many) ResultSet objects. A CallableStatement is a PreparedStatement, so I am hoping there won't be too much involved with regard to executing them. The main difference is being able to register and consume OUT parameters.

I'll be hacking on this, so patches are forthcoming. Any input is appreciated.

Comment by Sean Corfield [ 15/Mar/13 10:51 PM ]

I've never used stored procs (I don't like the complexity that I've seen them add to version control, change management and deployment) so I'm afraid I can't offer any input - but I really appreciate you taking this on! Thank you!

Comment by Sean Corfield [ 15/Sep/13 4:20 PM ]

Post 0.3.0. See also JDBC-64.

[JDBC-64] Support multiple result sets? Created: 03/Jul/13  Updated: 15/Dec/14

Status: Open
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Unresolved Votes: 0
Labels: None


Useful for stored procedure results:

(defn call-stored-proc [connection]
  (query connection
         ["{call someProc()}"]
         :as-arrays? true))

Java code to handle multiple result sets:

public static void executeProcedure(Connection con) {
    try {
        CallableStatement stmt = con.prepareCall(...);
        ..... // Set call parameters, if you have IN, OUT, or IN/OUT parameters
        boolean results = stmt.execute();
        int rsCount = 0;
        // Loop through the available result sets.
        while (results) {
            ResultSet rs = stmt.getResultSet();
            // Retrieve data from the result set.
            while (rs.next()) {
                .... // use rs.getXxx() methods to retrieve data
            }
            rs.close();
            rsCount++;
            // Check for the next result set
            results = stmt.getMoreResults();
        }
        stmt.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}

Comment by Sean Corfield [ 15/Sep/13 4:20 PM ]

Post 0.3.0, ideally after adding stored proc support properly (see JDBC-48).

Comment by Kyle Cordes [ 15/Sep/13 9:45 PM ]

With or without SPs, this would be an excellent addition; with some RDBMSs, use of compound statement (or SPs) with multiple result sets is relatively common.

Comment by Sean Corfield [ 02/Apr/14 6:07 PM ]

Discussion with Pieter Laeremans:

Sean: My thinking is that I would add :multi-result? to execute! and query and then arrange for them to return sequences of result sets. Unraveling the calls so :multi-result? can work cleanly inside those functions would be the hard part.

Pieter: That sounds fine by me. But there's something a bit more subtle, I guess: right now you can pass a row-fn function to transform rows; in the multi-resultset case it would perhaps be more appropriate to pass a seq of row-fns, so that a different function can be used on the rows of each result set.
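Pieter's "seq of row-fns" idea can be sketched as pairing each result set with its own row-transforming function by position. A minimal sketch in Java terms, where plain lists stand in for java.sql result sets (the class and method names are illustrative):

```java
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// Sketch: apply a different row-fn to each result set, paired by position.
public class MultiRowFn {
    public static <T, R> List<List<R>> mapResultSets(
            List<List<T>> resultSets, List<Function<T, R>> rowFns) {
        return IntStream.range(0, resultSets.size())
                // pair result set i with row-fn i
                .mapToObj(i -> resultSets.get(i).stream()
                        .map(rowFns.get(i))
                        .collect(Collectors.toList()))
                .collect(Collectors.toList());
    }
}
```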

Comment by Alexey Naumov [ 15/Dec/14 2:39 PM ]

Any updates on the issue?

Comment by Sean Corfield [ 15/Dec/14 2:58 PM ]

No update yet. No one has submitted a patch and I've been too busy to look at this in detail.

[JDBC-99] The age of reduce is upon us Created: 31/Aug/14  Updated: 08/Sep/14

Status: Open
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Kevin Downey Assignee: Sean Corfield
Resolution: Unresolved Votes: 3
Labels: None


JDBC code is pretty heavily into resource management: you have connections, result sets, and prepared statements, all of which require lifetime management.

clojure.java.jdbc is built around result-set-seqs, sequences of results. But lazy sequences provide no good way to manage the lifetime of the resources behind the sequences.

Clojure provides a mechanism to define a collection in terms of reduce, and a growing collection of ways to manipulate and transform reducible collections.

A collection that knows how to reduce itself has a means of managing the lifetime of associated resources: the lifetime of the reduce operation.

So it seems clear that result-set-seqs should be replaced with result-set-reducibles.

Comment by Ghadi Shayban [ 08/Sep/14 1:15 PM ]

Something like this would be amenable to reduce/transduce, used in conjunction with db-query-with-resultset.

Half of the knobs on jdbc/query are there to control seq realization; instead they should defer to reduce/reduced.

(into [] (take 5000) (queryr "select * from foo"))

The reducible collection returned should be one-shot, cleaning up its resources when the reduction finishes, and it would be an error to run it more than once.
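The one-shot reducible described above can be sketched as a collection that only exposes reduce, ties the resource's lifetime to the reduction (closing it in a finally block), and refuses a second run. A hedged sketch in Java terms; the names are illustrative, and an Iterable plus a Runnable stand in for an open ResultSet and its cleanup:

```java
import java.util.function.BiFunction;

// Sketch: a one-shot "result-set-reducible" whose resource lives exactly
// as long as the reduce operation.
public class ReducibleResults<T> {
    private final Iterable<T> source;   // stands in for an open ResultSet
    private final Runnable closer;      // stands in for closing stmt/rs/conn
    private boolean consumed = false;

    public ReducibleResults(Iterable<T> source, Runnable closer) {
        this.source = source;
        this.closer = closer;
    }

    public <A> A reduce(A init, BiFunction<A, T, A> f) {
        if (consumed) throw new IllegalStateException("reducible already consumed");
        consumed = true;
        try {
            A acc = init;
            for (T t : source) acc = f.apply(acc, t);
            return acc;
        } finally {
            closer.run();  // cleanup happens whether the reduce completes or throws
        }
    }
}
```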

[JDBC-137] make 'result-set-seq' accept customized result-set-read-column to support multi-database environment Created: 07/Aug/16  Updated: 12/Aug/16

Status: Open
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Zhou Xiangtao Assignee: Sean Corfield
Resolution: Unresolved Votes: 0
Labels: None


When using java.jdbc with Postgres composite types, the common way is to extend the IResultSetReadColumn protocol. When there are multiple databases in use, every database should be able to specify its own column reader.
Adding an option to result-set-seq to support a custom function in place of IResultSetReadColumn may be a solution for this situation.

Comment by Sean Corfield [ 12/Aug/16 10:03 PM ]

The closest parallel with setting parameters would be to have a :read-columns option (to result-set-seq and upstream in several calling functions, as well as a per-database default in the db-spec itself).

Like :set-parameters, this :read-columns function would be expected to map over the metadata itself – it would be passed the result set object and the result set metadata object, and the default implementation would map over (range 1 (inc (.getColumnCount rsmeta))) and call (.getObject rs idx) and then convert that to Clojure based on the corresponding column in rsmeta.
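The default :read-columns behaviour Sean describes can be sketched directly: given a result set and its metadata, walk columns 1 through getColumnCount and build one whole row. A minimal sketch in Java terms, using tiny stand-in interfaces in place of java.sql.ResultSet/ResultSetMetaData so it is self-contained (all names here are illustrative):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: the proposed default read-columns, mapping over column indexes 1..N.
public class ReadColumnsDemo {
    public interface Row  { Object getObject(int idx); }  // stand-in for ResultSet
    public interface Meta {                               // stand-in for ResultSetMetaData
        int getColumnCount();
        String getColumnLabel(int idx);
    }

    // Equivalent of mapping over (range 1 (inc (.getColumnCount rsmeta)))
    // and calling (.getObject rs idx) for each column.
    public static Map<String, Object> readColumns(Row rs, Meta rsmeta) {
        Map<String, Object> row = new LinkedHashMap<>();
        for (int idx = 1; idx <= rsmeta.getColumnCount(); idx++) {
            row.put(rsmeta.getColumnLabel(idx), rs.getObject(idx));
        }
        return row;
    }
}
```

A per-database :read-columns function would replace this default wholesale, which is why it amounts to "given a result set and its metadata, construct an entire row".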

That feels like a lot of "heavy lifting" but it's what folks have to do if they need per-database set-parameters behavior and it feels like the right approach (given a result set and its metadata, construct an entire row).

Would that solve your problem sufficiently?

Generated at Sat Aug 27 06:55:14 CDT 2016 using JIRA 4.4#649-r158309.