
[JDBC-93] Add support for read-only transactions Created: 26/Mar/14  Updated: 30/Jun/14  Resolved: 30/Jun/14

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Gerrit Hentschel Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None
Environment:

[org.clojure/java.jdbc "0.3.3"]



 Description   

Currently it is only possible to set the transaction isolation (via db-transaction*). In our case we use SERIALIZABLE isolation, so it makes sense to be able to mark read-only transactions using Connection/setReadOnly.

Thanks!
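A sketch of the usage this would enable (the :read-only? option name is a proposal here, mirroring the patch below, not an existing java.jdbc option):

```clojure
;; hypothetical usage, assuming the transaction macro gains a :read-only? option
(jdbc/with-db-transaction [t-con db-spec :isolation :serializable :read-only? true]
  (jdbc/query t-con ["SELECT balance FROM accounts WHERE id = ?" 42]))
```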



 Comments   
Comment by Sean Corfield [ 26/Mar/14 8:56 PM ]

Can you explain a bit more about how you'd expect this to work? Perhaps show me the sort of code / API you'd want to write / use for this?

Comment by Gerrit Hentschel [ 27/Mar/14 3:07 AM ]

Sure, actually I created a patch that describes how I would like to use it, but when trying to attach the file I get a 500 Internal Server Error.

I hope it is okay to just paste it here:

From 4a4ea3d478f7ddb717764499b7d7c3e5efb5caee Mon Sep 17 00:00:00 2001
From: Gerrit Hentschel <gerrit@doo.net>
Date: Thu, 27 Mar 2014 08:57:54 +0100
Subject: [PATCH] add support for read-only transactions

---
 src/main/clojure/clojure/java/jdbc.clj | 21 ++++++++++++++-------
 1 file changed, 14 insertions(+), 7 deletions(-)

diff --git a/src/main/clojure/clojure/java/jdbc.clj b/src/main/clojure/clojure/java/jdbc.clj
index 01c4f21..47a6d74 100644
--- a/src/main/clojure/clojure/java/jdbc.clj
+++ b/src/main/clojure/clojure/java/jdbc.clj
@@ -559,16 +559,20 @@ compatibility but it will be removed before a 1.0.0 release." }
   The isolation option may be :none, :read-committed, :read-uncommitted,
   :repeatable-read, or :serializable. Note that not all databases support
   all of those isolation levels, and may either throw an exception or
-  substitute another isolation level."
-  [db func & {:keys [isolation]}]
+  substitute another isolation level.
+  The read-only? option puts the transaction in read-only mode."
+  [db func & {:keys [isolation read-only?]}]
   (if (zero? (get-level db))
     (if-let [^java.sql.Connection con (db-find-connection db)]
       (let [nested-db (inc-level db)
             auto-commit (.getAutoCommit con)
-            old-isolation (.getTransactionIsolation con)]
+            old-isolation (.getTransactionIsolation con)
+            old-read-only? (.isReadOnly con)]
         (io!
          (when isolation
            (.setTransactionIsolation con (isolation isolation-levels)))
+         (when read-only?
+           (.setReadOnly con true))
          (.setAutoCommit con false)
          (try
            (let [result (func nested-db)]
@@ -583,9 +587,11 @@ compatibility but it will be removed before a 1.0.0 release." }
              (db-unset-rollback-only! nested-db)
              (.setAutoCommit con auto-commit)
              (when isolation
-               (.setTransactionIsolation con old-isolation))))))
+               (.setTransactionIsolation con old-isolation))
+             (when read-only?
+               (.setReadOnly con old-read-only?))))))
       (with-open [^java.sql.Connection con (get-connection db)]
-        (db-transaction* (add-connection db con) func :isolation isolation)))
+        (db-transaction* (add-connection db con) func :isolation isolation :read-only? read-only?)))
     (try
       (func (inc-level db))
       (catch Exception e
@@ -595,8 +601,9 @@ compatibility but it will be removed before a 1.0.0 release." }
   "Evaluates body in the context of a transaction on the specified database connection.
   The binding provides the database connection for the transaction and the name to which
   that is bound for evaluation of the body. The binding may also specify the isolation
-  level for the transaction, via the :isolation option.
-  (with-db-transaction [t-con db-spec :isolation level]
+  level for the transaction, via the :isolation option and/or set the transaction to
+  read-only, via the :read-only? option.
+  (with-db-transaction [t-con db-spec :isolation level :read-only? true]
     ... t-con ...)
   See db-transaction* for more details."
   [binding & body]
-- 
1.8.3.4 (Apple Git-47)
Comment by Sean Corfield [ 27/Mar/14 11:35 AM ]

Thanks Gerrit. That seems nice and simple. I can't take the patch exactly as-is, because you do not appear to be a formal contributor yet - http://clojure.org/contributing - but this definitely points me in the right direction to create tests and implement something that should provide the functionality you're after.

Comment by Gerrit Hentschel [ 28/Mar/14 3:30 AM ]

Great! Glad it helped.
I will take care of becoming a formal contributor, but feel free to use that patch as you see fit.
Thanks Sean.

Comment by Sean Corfield [ 30/Jun/14 9:46 PM ]

Will be in 0.3.5





[JDBC-94] Parsing of sqlite or HSQLDB in-memory databases fail Created: 01/Apr/14  Updated: 30/Jun/14  Resolved: 30/Jun/14

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Peter Schwarz Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: bug
Environment:

Using the following driver dependencies:
[org.hsqldb/hsqldb "2.3.2"]
[org.xerial/sqlite-jdbc "3.7.2"]



 Description   

When entering a sqlite memory db uri: "jdbc:sqlite::memory:", the following error occurs:

SQLException out of memory org.sqlite.DB.throwex (DB.java:370)

This is a vague exception which occurs when the db is not actually created (see https://bitbucket.org/xerial/sqlite-jdbc).

Likewise, when parsing a HSQLDB in-memory db url: "jdbc:hsqldb:mem:testdb", the following error occurs:

2014-04-01T14:13:11.996-0500  SEVERE  could not reopen database
org.hsqldb.HsqlException: Database lock acquisition failure: lockFile: org.hsqldb.persist.LockFile@2be0e04[file =/.lck, exists=false, locked=false, valid=false, ] method: openRAF reason: java.io.FileNotFoundException: /.lck (Permission denied)

It appears to be creating the equivalent connection for

{:subprotocol "hsqldb" :subname ""}
but should be creating
{:subprotocol "hsqldb" :subname "mem:testdb"}
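For comparison, a sketch of the spec maps the two in-memory URI strings should parse to (the HSQLDB one per the description above; the SQLite one by analogy):

```clojure
;; "jdbc:sqlite::memory:" -> by analogy with the HSQLDB case
{:subprotocol "sqlite" :subname ":memory:"}

;; "jdbc:hsqldb:mem:testdb" -> per the description above
{:subprotocol "hsqldb" :subname "mem:testdb"}
```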



 Comments   
Comment by Sean Corfield [ 01/Apr/14 3:17 PM ]

Thanx Peter. That sounds reasonably easy to repro so I'll create a couple of test cases shortly.

Comment by Sean Corfield [ 30/Jun/14 9:09 PM ]

Finally taking a look at these - sorry for the delay - and it seems that java.jdbc expects a string argument to be something you can simply construct a URI out of and then pass to get-connection, so this suggests that either java.net.URI might be monkeying with the value, or parse-properties-uri can't figure out how to reconstruct the appropriate spec map from such a URI.

Comment by Sean Corfield [ 30/Jun/14 9:34 PM ]

Will be in 0.3.5.





[JDBC-98] Add execute-return-keys! Created: 30/Jun/14  Updated: 30/Jun/14  Resolved: 30/Jun/14

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Tejas Dinkar Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: None
Environment:

Developed on OSX


Attachments: Text File 0001-Adding-execute-return-keys-which-allows-insertions-w.patch    
Patch: Code and Test

 Description   

Added a function execute-return-keys! which can execute a prepared statement and return the key inserted.

Though I only support a single insert, I'm returning a list of values, similar to the insert! function.



 Comments   
Comment by Sean Corfield [ 30/Jun/14 11:32 AM ]

Given the availability of insert! which returns keys when it can, what use case does this satisfy?

Comment by Tejas Dinkar [ 30/Jun/14 11:39 AM ]

insert! creates a SQL query (from a string) every time it's called.

I would like to reuse the prepared statement across multiple insert! calls.

I originally started making insert! accept a prepared statement, patching db-do-prepared-return-keys, but I thought the execute-return-keys! would be a better place for it.

Comment by Sean Corfield [ 30/Jun/14 11:54 AM ]

Why not just use db-do-prepared-return-keys for this?

Adding an arbitrary variant of execute! is not a good approach, in my opinion. If you need something lower level than insert! or execute! as they stand, all the pieces should be there already.

Comment by Tejas Dinkar [ 30/Jun/14 12:05 PM ]

I can resubmit the patch to only change db-do-prepared-return-keys.

In the current patch, I did patch db-do-prepared-return-keys, but I just created execute-return-keys! as an alternative front end for it.

Comment by Sean Corfield [ 30/Jun/14 12:26 PM ]

Ah, db-do-prepared-return-keys only supports one parameter group, so I can see why it won't work for you here.

So this is to address the one specific case where you want to create a single prepared statement and run it for multiple parameter groups... If that really isn't possible with the current API, a generic way to do so should be added. You can easily create a prepared statement that will return generated keys, using prepare-statement and query will accept a prepared statement instead of SQL, so this should just be a matter of ensuring execute! and/or db-do-commands can accept a prepared statement instead of SQL. That would be a more generic solution.

Comment by Tejas Dinkar [ 30/Jun/14 12:29 PM ]

yes, but sadly execute! only returns the number of rows updated, not the actual rows (ids), though it accepts a prepared statement.

So there is a need for some generic API that is similar to execute!, but returns the rows.

Comment by Sean Corfield [ 30/Jun/14 12:32 PM ]

Looking more closely at db-do-prepared - which execute! uses - it should accept a prepared statement instead of a SQL string and thus so should execute!.

Therefore, you should be able to do something like:

(execute! db-spec [(prepare-statement (get-connection db-spec) "insert ..." :return-keys true) params params])
Comment by Sean Corfield [ 30/Jun/14 12:51 PM ]

And I've just noticed that this functionality is new in 0.3.4 which has not yet been released! OK, so if I make a new release, you will have the functionality you need.

Comment by Tejas Dinkar [ 30/Jun/14 1:07 PM ]

(yes, I know about that patch to 0.3.4, I submitted that patch).

I believe the only way to get the generated keys, is via the .getGeneratedKeys function, which is used in [1]. It is only in the execution path for insert!, not execute!.

That said, it is possible to do the following (just tried it), so maybe this patch isn't as useful as I'd thought.

(let [prepared-statement (j/prepare-statement ... :return-keys true)]
  (j/execute! db prepared-statement)
  (j/result-set-seq (.getGeneratedKeys prepared-statement)))

[1] https://github.com/clojure/java.jdbc/blob/master/src/main/clojure/clojure/java/jdbc.clj#L684

Comment by Sean Corfield [ 30/Jun/14 1:23 PM ]

I've made the 0.3.4 release. The code you show looks like a reasonable compromise, given my reluctance to extend the API in somewhat arbitrary ways.

I'm going to close this.

Comment by Sean Corfield [ 30/Jun/14 1:25 PM ]

With the 0.3.4 release allowing execute! to accept a prepared statement and the reasonably clean way to execute a prepared statement and then get the generated keys, I don't think a change to the API is necessary.





[JDBC-97] Support for setting options such as setTimeout on the PreparedStatement Created: 11/Jun/14  Updated: 11/Jun/14  Resolved: 11/Jun/14

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: David Powell Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: None


 Description   

It is currently rather difficult to set options such as stmt.setTimeout(), as the creation of the PreparedStatement is encapsulated inside db-query-with-resultset. Some sort of option or hook for this would be cool.



 Comments   
Comment by Sean Corfield [ 11/Jun/14 12:11 PM ]

You can pass a PreparedStatement instead of a SQL string - so you can build your PreparedStatement - using prepare-statement or manually - and then pass it into query as the first element of the vector that would normally contain the SQL string plus parameters.
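A sketch of that approach, assuming java.jdbc 0.3.x and a db-spec map `db-spec` in scope (note the actual JDBC method for a statement timeout is setQueryTimeout, which takes seconds):

```clojure
(require '[clojure.java.jdbc :as jdbc])

(let [conn (jdbc/get-connection db-spec)
      stmt (jdbc/prepare-statement conn "SELECT * FROM orders WHERE status = ?")]
  (.setQueryTimeout stmt 5)              ; timeout in seconds, set on the statement directly
  ;; pass the PreparedStatement as the first element of the vector, then the parameters
  (jdbc/query db-spec [stmt "pending"]))
```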

Comment by Sean Corfield [ 11/Jun/14 12:11 PM ]

Already possible with existing API.

Comment by David Powell [ 11/Jun/14 12:35 PM ]

Ah cool - I was trying to do this with execute!, but I see that that won't be possible until release 0.3.4.





[JDBC-96] execute! should accept a PreparedStatement Created: 13/May/14  Updated: 18/May/14  Resolved: 18/May/14

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Tejas Dinkar Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: enhancement, patch

Attachments: Text File 0001-execute-can-now-accept-prepared-statements-as-well.patch    
Patch: Code and Test

 Description   

This patch to clojure.java.jdbc allows you to pass in a prepared statement to the execute function.

I have already signed the CLA (Tejas Dinkar).

I can also open a pull request from here: https://github.com/gja/java.jdbc/tree/execute-should-accept-a-prepared-statement



 Comments   
Comment by Sean Corfield [ 14/May/14 12:26 AM ]

This looks very nice Tejas, thank you! I'll apply the patch at the weekend and verify everything and let you know if I hit any issues.

Comment by Sean Corfield [ 18/May/14 7:11 PM ]

Patch applied. Will be in 0.3.4 version of library.





[JDBC-92] Provide more intuitive db-spec format Created: 07/Mar/14  Updated: 28/Apr/14  Resolved: 28/Apr/14

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

All this :subname / :subprotocol stuff is fine but it would be nice to just be able to say:

{:dbtype "mysql" :dbname "foo"}

and have the :subname be constructed from defaults, optionally allowing :host / :port to override 127.0.0.1 and the default port (for mysql, 3306; for mssql/jtds, 1433).
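A sketch of the intended expansion for the example above (host and port filled in from the defaults just described; subname format per the MySQL driver):

```clojure
;; {:dbtype "mysql" :dbname "foo"} would expand to roughly:
{:subprotocol "mysql"
 :subname "//127.0.0.1:3306/foo"}
```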



 Comments   
Comment by Glen Mailer [ 27/Apr/14 4:29 AM ]

It would also be helpful to support use of a URI for config, as used in systems like heroku.

This link seems to be the best example I can find:

http://turbogears.org/1.0/docs/DbUri.html

Comment by Sean Corfield [ 28/Apr/14 9:16 PM ]

Glen, URIs are already supported. You can see examples in the test suite: https://github.com/clojure/java.jdbc/blob/master/src/test/clojure/clojure/java/test_jdbc.clj#L90-L98

Comment by Sean Corfield [ 28/Apr/14 9:18 PM ]

See https://github.com/clojure/java.jdbc/commit/4667bdd86e73870cbb1e369c65fe8262fa964637

Will be in 0.3.4 release.





[JDBC-91] Issue with H2 "script" command Created: 20/Feb/14  Updated: 15/Apr/14

Status: In Progress
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Mike Anderson Assignee: Sean Corfield
Resolution: Unresolved Votes: 0
Labels: None


 Description   

While trying to execute an H2 "script" statement as follows:

(defn script []
  (jdbc/execute! db [(str "script to '" "dbscript.txt" "';")]))

I get the following error message:

org.h2.jdbc.JdbcSQLException: Method is not allowed for a query. Use execute or executeQuery instead of executeUpdate; SQL statement:
script to 'dbscript.txt'; [90001-175]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:332)
at org.h2.message.DbException.get(DbException.java:172)
at org.h2.message.DbException.get(DbException.java:149)
at org.h2.message.DbException.get(DbException.java:138)
at org.h2.command.Prepared.update(Prepared.java:200)
at org.h2.command.CommandContainer.update(CommandContainer.java:79)
at org.h2.command.Command.executeUpdate(Command.java:253)
at org.h2.jdbc.JdbcPreparedStatement.executeUpdateInternal(JdbcPreparedStatement.java:154)
at org.h2.jdbc.JdbcPreparedStatement.executeUpdate(JdbcPreparedStatement.java:140)
at clojure.java.jdbc$db_do_prepared$fn__339.invoke(jdbc.clj:639)
at clojure.java.jdbc$db_transaction_STAR_.doInvoke(jdbc.clj:514)
at clojure.lang.RestFn.invoke(RestFn.java:425)
at clojure.java.jdbc$db_do_prepared.doInvoke(jdbc.clj:638)
at clojure.lang.RestFn.invoke(RestFn.java:442)
at clojure.java.jdbc$execute_BANG_$execute_helper__368.invoke(jdbc.clj:738)
at clojure.java.jdbc$execute_BANG_.doInvoke(jdbc.clj:742)

It's possible that I'm just doing something stupid as I don't really know the internals of java.jdbc, but it looks like this is a bug with the type of query being constructed in "db_do_prepared"?



 Comments   
Comment by Sean Corfield [ 28/Feb/14 1:44 AM ]

Mike, since "script to file.txt;" is not a query or update, I would expect db-do-commands to be the right function to call here, not execute! - can you try that and let me know if it works?

Comment by Mike Anderson [ 14/Apr/14 10:27 PM ]

Hi Sean, I still seem to get an error:

(j/db-do-commands db (str "SCRIPT TO '" "dbscript.txt" "';"))
=>
JdbcBatchUpdateException Method is not allowed for a query. Use execute or executeQuery instead of executeUpdate; SQL statement:
SCRIPT TO 'dbscript.txt'; [90001-175] org.h2.jdbc.JdbcStatement.executeBatch (JdbcStatement.java:672)

Comment by Mike Anderson [ 14/Apr/14 10:33 PM ]

Interestingly, this seems to work:

(j/query db [(str "SCRIPT TO '" "dbscript.txt" "';")])

So I have a good workaround at least. Not sure what the right overall solution is - maybe some clearer error messages or docstrings regarding what is / isn't allowed with each type of query would be good?

Comment by Sean Corfield [ 14/Apr/14 11:03 PM ]

Well, I don't have much control over what the JDBC driver itself accepts or doesn't accept for various commands, but perhaps introducing a variant of db-do-commands that allows you to specify exactly what method on the PreparedStatement gets called would be helpful here. Or maybe you could just create a PreparedStatement object - there's an API for that - and run non-standard commands thru that directly?

Comment by Mike Anderson [ 15/Apr/14 11:02 PM ]

Well I'm not a JDBC expert so don't want to give too strong an opinion on API design.

My feeling is that some kind of variant of `db-do-commands`, perhaps with keyword arguments (which override a sensible default), would be useful. But that is only a hunch... and I'm not sure if it conflicts with any other assumptions in the design. And I certainly wouldn't want to complicate the API design if this is just a corner case / H2 quirk.

Perhaps just docstring changes that make clear the usage of executeQuery vs. execute vs. executeBatch etc. would be sufficient? I think that was what was fundamentally causing me some confusion.

Anyway I have a good workaround now. So feel free to close / re-purpose this issue as you wish!





[JDBC-95] Evaluate body in context of provided java.sql.Connection without closing it finally Created: 14/Apr/14  Updated: 14/Apr/14  Resolved: 14/Apr/14

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Esko Lahti Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: None


 Description   

There are situations where:

  • There is Java code calling Clojure functions that access database.
  • Java code has already opened a java.sql.Connection instance that should be used to access the database instead of opening a new connection on the Clojure side.

The with-db-connection macro could be used for this with a {:connection conn} db spec. However, as it wraps the body inside a with-open macro, the provided connection will be closed at the end.

In practice, the following macro could fix the problem, though I don't know whether the already existing with-db-connection macro should be extended instead. I have also skipped the docstring as this is only an example:

(defmacro with-provided-conn
  [binding & body]
  `(let [~(first binding) (jdbc/add-connection {} ~(second binding))]
     ~@body))



 Comments   
Comment by Sean Corfield [ 14/Apr/14 11:01 AM ]

with-db-connection is a recent addition - a convenience for users who want to open and close a connection for a specific piece of code. The default mode of operation in java.jdbc was for users to manage their own connection. Your macro removes useful information from the db-spec. A more useful approach would be: (jdbc/add-connection db-spec (jdbc/get-connection db-spec)) so that the db-spec is maintained in the "connected spec". However, that requires the db-spec be passed in and locks you into calling get-connection, which not all code may want. What you're asking for is not provided for a reason: it isn't universally applicable to all java.jdbc users - therefore I do not think it belongs in the library.
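A sketch of that suggested pattern, with the caller managing the connection's lifetime (nothing here closes it; `db-spec` is assumed in scope):

```clojure
(let [conn (jdbc/get-connection db-spec)
      connected-spec (jdbc/add-connection db-spec conn)]
  ;; use the connected spec freely; java.jdbc reuses conn and never closes it
  (jdbc/query connected-spec ["SELECT 1"]))
;; conn is still open afterwards; close it when appropriate: (.close conn)
```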





[JDBC-64] Support multiple result sets? Created: 03/Jul/13  Updated: 02/Apr/14

Status: Open
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Useful for stored procedure results:

call_proc.clj
(defn call-stored-proc [connection]
  (jdbc/query
    (myapp.db/connection)
    ["{call someProc()}"]
    :as-arrays? true))

Java code to handle multiple result sets:

MultiResults.java
public static void executeProcedure(Connection con) {
    try {
        CallableStatement stmt = con.prepareCall(...);
        ..... // Set call parameters, if you have IN, OUT, or IN/OUT parameters

        boolean results = stmt.execute();
        int rsCount = 0;

        // Loop through the available result sets.
        while (results) {
            ResultSet rs = stmt.getResultSet();
            // Retrieve data from the result set.
            while (rs.next()) {
                .... // using rs.getXXX() methods to retrieve data
            }
            rs.close();

            // Check for next result set
            results = stmt.getMoreResults();
        }
        stmt.close();
    }
    catch (Exception e) {
        e.printStackTrace();
    }
}


 Comments   
Comment by Sean Corfield [ 15/Sep/13 4:20 PM ]

Post 0.3.0, ideally after adding stored proc support properly (see JDBC-48).

Comment by Kyle Cordes [ 15/Sep/13 9:45 PM ]

With or without SPs, this would be an excellent addition; with some RDBMSs, use of compound statements (or SPs) with multiple result sets is relatively common.

Comment by Sean Corfield [ 02/Apr/14 6:07 PM ]

Discussion with Pieter Laeremans:

Sean: My thinking is that I would add :multi-result? to execute! and query and then arrange for them to return sequences of result sets. Unraveling the calls so multi-result? can work cleanly inside those functions would be the hard part

Pieter: That sounds fine by me. But there's something a bit more subtle, I guess: now you can pass a function row-fn to transform rows; in the multi-resultset case it would perhaps be more appropriate to pass on a seq of row-fns, so that a different function can be used on different result sets.





[JDBC-90] Add two subprotocol - classname mappings for oracle support Created: 17/Feb/14  Updated: 28/Feb/14  Resolved: 28/Feb/14

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Niels van Klaveren Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

Currently, Oracle subprotocols and classnames are missing. The Oracle jdbc driver supports two subprotocols, "oracle:thin" and "oracle:oci" (for thin client and oracle client support). Both need to be mapped to the class "oracle.jdbc.OracleDriver".
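In other words, two new entries in java.jdbc's subprotocol-to-driver-classname map, roughly like this (a sketch; the actual map name inside the library may differ):

```clojure
{"oracle:thin" "oracle.jdbc.OracleDriver"
 "oracle:oci"  "oracle.jdbc.OracleDriver"}
```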



 Comments   
Comment by Sean Corfield [ 28/Feb/14 1:39 AM ]

Will be in 0.3.4





[JDBC-87] Laziness on metadata-result results in closed resultset under Microsoft SQL Created: 06/Jan/14  Updated: 30/Jan/14  Resolved: 30/Jan/14

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Niels van Klaveren Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

Getting a collection of tables, and then using these to get the columns inside the tables, works swimmingly under Oracle, but throws an exception under Microsoft SQL (driver 4.0.2206.100)

CompilerException com.microsoft.sqlserver.jdbc.SQLServerException: The result set is closed., compiling:(form-init602030969958815921.clj:2:3)

(def tables
  (jdbc/with-db-metadata [m db]
    (->>
      (jdbc/metadata-result (.getTables m nil nil nil (into-array String ["TABLE"])))
      (map :table_name)
      doall)))

(def columns
  (jdbc/with-db-metadata [m db]
    (->>
      tables
      (mapcat #(jdbc/metadata-result (.getColumns m nil nil % nil)))
      doall)))

The problem lies in the mapcat part. Using only map works fine, but there's no way to concat before the doall, since that results in the same exception.

Changing metadata-result from

(if (instance? java.sql.ResultSet rs-or-value)
    (result-set-seq rs-or-value :identifiers identifiers :as-arrays? as-arrays?)
    rs-or-value)

to

(if (instance? java.sql.ResultSet rs-or-value)
    (doall (result-set-seq rs-or-value :identifiers identifiers :as-arrays? as-arrays?))
   rs-or-value)

fixes things, but I'm not entirely sure of the consequences that would entail in case the laziness is intended.



 Comments   
Comment by Niels van Klaveren [ 06/Jan/14 7:23 AM ]

Giving metadata-result a result-set-fn parameter defaulting to either identity or doall would work, I guess.

(defn metadata-result
  "If the argument is a java.sql.ResultSet, turn it into a result-set-seq,
   else return it as-is. This makes working with metadata easier.
   Also accepts :identifiers and :as-arrays? to control how the ResultSet
   is transformed and returned. See query for more details."
  [rs-or-value & {:keys [identifiers as-arrays? result-set-fn]
                  :or {identifiers str/lower-case result-set-fn doall}}]
  (if (instance? java.sql.ResultSet rs-or-value)
    (result-set-fn (result-set-seq rs-or-value :identifiers identifiers :as-arrays? as-arrays?))
    rs-or-value))
Comment by Sean Corfield [ 06/Jan/14 11:15 AM ]

Yes, I think that is probably the right solution. I'll review it in the context of query etc and make sure it would create a consistent API. It might be worthwhile having :row-fn as well but I need to compare the functions and think carefully about the impact of those changes.

Thanx for spotting the problem!

Comment by Sean Corfield [ 30/Jan/14 1:34 AM ]

Add :row-fn and :result-set-fn to metadata-result





[JDBC-89] If query is called with a bare SQL string instead of a vector, SQLite driver dumps Created: 29/Jan/14  Updated: 30/Jan/14  Resolved: 30/Jan/14

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   
(query db "select * from test") ;; should be a vector

Causes SQLite to dump and probably isn't very pretty on other DBs. java.jdbc should either validate arguments better, or accept a string and package it up itself.



 Comments   
Comment by Sean Corfield [ 30/Jan/14 12:26 AM ]
(query db "select * from fruit")

is now legal but will remain undocumented. Since optional positional arguments are expected, any attempt to call query with SQL and params not in a vector will throw an IllegalArgumentException anyway. This seems like a reasonable compromise.





[JDBC-88] Update :arglists metadata so it no longer confuses Eastwood Created: 13/Jan/14  Updated: 13/Jan/14

Status: Open
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Task Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Unresolved Votes: 0
Labels: None


 Description   

:arglists metadata should be valid Clojure syntax. Examples should be moved into docstrings.



 Comments   
Comment by Andy Fingerhut [ 13/Jan/14 4:02 PM ]

Eastwood's use of :arglists for :wrong-arity linter checking is definitely 'at odds' with the way java.jdbc and some other libraries use it for clearer documentation.

I would recommend leaving :arglists as-is for java.jdbc, and wait for the next version of Eastwood to add a capability to have something like an :eastwood-arglists for functions that explicitly replace their :arglists like this. Such a capability might best belong in Eastwood itself, since it is the new tool on the block.

As an example, I was considering that if a fn/macro explicitly chose to be Eastwood-friendly in this way, it could have an :arglists and :eastwood-arglists explicitly, for those few vars that replace their :arglists. When one is using Eastwood with a library that they do not want to modify in that way, there could be some extra config file for Eastwood where one could specify these :eastwood-arglists.

Sound reasonable?





[JDBC-86] Provide better protocol-based extension point for setting parameters in a SQL statement Created: 26/Dec/13  Updated: 29/Dec/13  Resolved: 29/Dec/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

The current hooks are:

  • ISQLValue protocol based solely on the value, but still uses .setObject to add the parameter value
  • :set-parameters key in db-spec, to override the entire parameter setting mechanism

Marcus Brito suggested this: https://github.com/clojure/java.jdbc/pull/34

That's a cleaner extension point (that still preserves the basic ISQLValue hook) but I can't accept a PR as-is.



 Comments   
Comment by Sean Corfield [ 29/Dec/13 6:24 PM ]

Added ISQLParameter / set-parameter protocol / function.





[JDBC-85] Two unit tests do not test anything Created: 23/Dec/13  Updated: 23/Dec/13  Resolved: 23/Dec/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Minor
Reporter: Andy Fingerhut Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None

Attachments: File jdbc-85-v1.diff    

 Description   

The following two unit tests are of the form (is (= expr1) expr2), which can never fail, because (= expr1) is always true.

(is (= :K_W) (sql/as-keyword :K_W))
(is (= :k.w) (sql/as-keyword :k.w))))

Found using a pre-release version of the Eastwood Clojure lint tool.
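For clarity, the probably-intended form, with the parens shuffled so each assertion actually compares the two expressions (as the patch description below puts it):

```clojure
(is (= :K_W (sql/as-keyword :K_W)))
(is (= :k.w (sql/as-keyword :k.w)))
```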



 Comments   
Comment by Andy Fingerhut [ 23/Dec/13 6:18 PM ]

Patch jdbc-85-v1.diff simply shuffles around the parens so that the tests are of the probably-intended form (is (= expr1 expr2))

Comment by Sean Corfield [ 23/Dec/13 6:27 PM ]

Thanx Andy!

Comment by Sean Corfield [ 23/Dec/13 6:37 PM ]

Patch applied.





[JDBC-65] Using execute! with no params to insert a row fails on SQLite Created: 09/Jul/13  Updated: 16/Dec/13  Resolved: 16/Dec/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

This is due to executeUpdate. Need to reverify the issues around using executeBatch with no params (since that looks like what db-do-commands does - which works - but executeUpdate was used as a workaround for an issue with no params at some point).

user> (def sqlite-db {:subprotocol "sqlite"
                :subname "clojure_test_sqlite"})
#'user/sqlite-db
user> (clojure.java.jdbc/db-do-commands sqlite-db (ddl/create-table :test [:col :int]))
(0)
user> (clojure.java.jdbc/query sqlite-db ["select * from test"])
()
user> (execute! sqlite-db ["insert into test values (?)" 42])
(1)
user> (clojure.java.jdbc/query sqlite-db ["select * from test"])
({:col 42})
user> (execute! sqlite-db ["insert into test values (123)"])
nil
user> (clojure.java.jdbc/query sqlite-db ["select * from test"])
({:col 42})
user> (clojure.java.jdbc/insert! sqlite-db :test {:col 123})
({:last_insert_rowid() 2})
user> (clojure.java.jdbc/query sqlite-db ["select * from test"])
({:col 42} {:col 123})



 Comments   
Comment by Sean Corfield [ 15/Sep/13 4:19 PM ]

Will deal with this during the beta of 0.3.0.

Comment by Sean Corfield [ 16/Dec/13 8:19 PM ]

This turned out to be a bug in how execute! handled empty param group sequences, but only SQLite was affected.





[JDBC-83] Test name test-update-or-insert-values appears twice in namespace clojure.java.test-jdbc Created: 15/Dec/13  Updated: 16/Dec/13  Resolved: 16/Dec/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Minor
Reporter: Andy Fingerhut Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

When the same name is used for more than one deftest form, the first one's tests will never be run, reducing test coverage. Recommend changing the name of one of these two deftests.
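
Because deftest defines a var, a later deftest with the same name silently rebinds it; a minimal illustration (the test bodies here are hypothetical):

```clojure
(deftest test-update-or-insert-values   ; defines the test var...
  (is (= 1 1)))

(deftest test-update-or-insert-values   ; ...and this rebinds it, so the
  (is (= 2 2)))                         ; first test body is never run
```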



 Comments   
Comment by Sean Corfield [ 16/Dec/13 12:34 AM ]

Thanx Andy. This happened when I added a whole bunch of -with-db tests and then (just recently) started renaming them as part of the cleanup.

Comment by Sean Corfield [ 16/Dec/13 3:17 PM ]

Looks like it was just missed when I added the -with-isolation tests more recently.





[JDBC-84] Some JDBC drivers return a Boolean that is constructed and not identical to Clojure's true/false Created: 16/Dec/13  Updated: 16/Dec/13  Resolved: 16/Dec/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

After some experimentation, I've discovered that certain JDBC drivers return the equivalent of (Boolean. true) and (Boolean. false) which are truthy/falsey but sometimes produce surprising results since they are not actually true and false. Extending the result set reading protocol to Boolean to turn them into canonical values via if seems to do the trick.



 Comments   
Comment by Sean Corfield [ 16/Dec/13 2:25 PM ]

A simple (if v true false) is not sufficient for this - it needs to be (if (= true v) true false) based on tests I've conducted. The problem is that the constructed (Boolean. false) value behaves as truthy.
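
The gotcha is easy to see at a REPL; a quick sketch of why the equality check is needed:

```clojure
;; compiled `if` tests identity against the canonical Boolean/FALSE, so a
;; constructed (Boolean. false) is truthy
(if (Boolean. false) :truthy :falsey)        ;=> :truthy

;; comparing with = (which delegates to .equals) canonicalizes correctly
(if (= true (Boolean. false)) true false)    ;=> false
```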

Comment by Sean Corfield [ 16/Dec/13 3:13 PM ]

Added Boolean implementation to IResultSetReadColumn.





[JDBC-82] Fix Android release build issue Created: 13/Dec/13  Updated: 13/Dec/13  Resolved: 13/Dec/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Zach Oakes Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None
Environment:

Ubuntu x64, OpenJDK 7


Attachments: Text File jdbc.clj.patch    
Patch: Code

 Description   

This patch fixes one remaining issue related to using this library in Android apps. While compiling for Android is now possible, there is an error thrown during release builds. This occurs during the "dexing" phase, when it converts from JVM to Dalvik bytecode.

It fails when it gets to the class file generated by the IResultSetReadColumn protocol definition. This is because it makes duplicate use of the "_" symbol to represent unused arguments. This patch simply renames the symbols so they are unique.
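
A sketch of the kind of rename involved (the argument names below are illustrative, not the actual patch):

```clojure
;; before: repeated _ argument names can produce duplicate entries in the
;; generated class file, which Android's dex tool rejects
(extend-protocol IResultSetReadColumn
  Object
  (result-set-read-column [obj _ _] obj))

;; after: each unused argument gets a unique name
(extend-protocol IResultSetReadColumn
  Object
  (result-set-read-column [obj _2 _3] obj))
```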



 Comments   
Comment by Sean Corfield [ 13/Dec/13 1:07 AM ]

Updated. Will be in rc2/gold.





[JDBC-75] Provide a way to pass options into a transaction Created: 15/Nov/13  Updated: 12/Dec/13  Resolved: 12/Dec/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

The obvious solution here is to allow keyword/value pairs in the db-transaction "binding" spec and pass those through to the Java transaction code.

Exact details TBD.



 Comments   
Comment by Sean Corfield [ 12/Dec/13 8:15 PM ]

Looks like just isolation level is supported?

http://docs.oracle.com/javase/6/docs/api/java/sql/Connection.html#setTransactionIsolation(int)

Comment by Sean Corfield [ 12/Dec/13 8:49 PM ]

with-db-transaction allows :isolation in the binding with values :read-committed, :read-uncommitted, :repeatable-read, :serializable
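
Usage then looks like the following (the db-spec, table, and column names are hypothetical):

```clojure
(require '[clojure.java.jdbc :as jdbc])

(jdbc/with-db-transaction [t-con db-spec :isolation :serializable]
  ;; both operations run on one connection at SERIALIZABLE isolation
  (jdbc/query t-con ["select balance from accounts where id = ?" 1])
  (jdbc/update! t-con :accounts {:balance 100M} ["id = ?" 1]))
```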





[JDBC-81] Deprecate db-transaction and add with-db-transaction for better consistency Created: 12/Dec/13  Updated: 12/Dec/13  Resolved: 12/Dec/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Comments   
Comment by Sean Corfield [ 12/Dec/13 8:11 PM ]

Added with-db-transaction. Deprecated db-transaction (it prints a deprecation warning). Updated code and tests to use with-db-transaction instead.





[JDBC-80] Add support for generic java.sql.DatabaseMetaData use now connection management is more explicit. Created: 11/Dec/13  Updated: 12/Dec/13  Resolved: 12/Dec/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Niels van Klaveren Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

In clojure.java.jdbc 0.2.3 it was easy to call methods on the connection, like .getDatabaseMetaData.

Now connection management has changed, and finding, reaching and keeping connections (and accompanying DatabaseMetaData) is a lot harder.

It would be good to have two new functions that give access to all database metadata in a minimalistic way and keep connection management in line with the new clojure.java.jdbc 0.3.0 ideals.

The first is a function that gets / sets up a connection by the new connection rules (find-, get-, add-connection), and provides a local binding for the DatabaseMetaData object resulting from calling .getDatabaseMetaData on that connection.
Of course, like a transaction, this also means the actions done within that binding need a single connection that's kept open for the duration of the binding block.

The second is a function (or an enhancement of result-set-seq) that calls result-set-seq if the argument is (instance? java.sql.ResultSet), or else returns the argument itself; this is because metadata method calls can either give back a result set that needs to be clojurized (f.i. .getTables) or a plain object (f.i. .getDriverName). Let's call it meta-result as an example.

That would make things like the following possible:

(db-metadata [m db]
  (map
   #(meta-result (.getIndexInfo m nil nil (% :table_name) false false))
   (meta-result (.getTables m nil schema nil (into-array String ["TABLE"])))))

As far as I can see that's a pretty minimalistic set of functions that unlocks all DatabaseMetaData possibilities. Also, because it integrates with the new connection rules, it would integrate nicely with transactions, in case of queries that are dynamically generated on metadata within the same transaction.



 Comments   
Comment by Sean Corfield [ 11/Dec/13 11:00 AM ]

Nice suggestion. Thanx. with-db-metadata is probably more in line with the current naming and perhaps metadata-result to tie back to that name, rather than cause any confusion with Clojure's built-in meta?

Comment by Niels van Klaveren [ 11/Dec/13 11:27 AM ]

Glad you like it. For my use it was the only basic option still missing from clojure.java.jdbc.
The names sound fine to me; I agree any possible confusion with core's meta should be avoided.

Comment by Sean Corfield [ 12/Dec/13 7:56 PM ]

Added with-db-metadata and metadata-result. Will be in RC1.
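
With the released names, usage would look roughly like this (the db-spec is hypothetical):

```clojure
(require '[clojure.java.jdbc :as jdbc])

;; md is bound to the connection's DatabaseMetaData for the body's duration
(jdbc/with-db-metadata [md db-spec]
  ;; metadata-result clojurizes a ResultSet and passes other values through
  (jdbc/metadata-result
   (.getTables md nil nil nil (into-array String ["TABLE"]))))
```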





[JDBC-79] Add function to group multiple (nested) statements / method calls on one connection without commit / rollback mechanics Created: 11/Dec/13  Updated: 12/Dec/13  Resolved: 12/Dec/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Niels van Klaveren Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

Currently, it is very easy to get spurious connection use on multiple statements, for instance by mapping a query function over a collection.
A workaround for this can be done by providing a binding through db-transaction.
However, this complects single connection use with commit / rollback mechanics that might be unwanted.

Another option is providing the binding yourself by wrapping it in a with-open/let construction.
This needs people to get into the implementation of the connection system, and might not play well with it.

Using connection pools is the last option, but usually is overkill for simple tasks.

Creating a macro and function that provides a simple binding for a connection in the way db-transaction does, without the included commit/rollback mechanics would do the trick. Such a function would play nice with the rest of the new connection propagation system (even including levels), and still doesn't rely on the rebinding of global vars.



 Comments   
Comment by Sean Corfield [ 11/Dec/13 10:55 AM ]

FYI: https://github.com/clojuredocs/guides/blob/master/articles/ecosystem/java_jdbc/home.md#reusing-connections

I'll probably add with-db-connection for this.

Comment by Sean Corfield [ 12/Dec/13 6:57 PM ]

Added with-db-connection. Will be in RC1.
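
A sketch of the resulting usage — every query in the body shares one open connection, with no commit/rollback mechanics (db-spec and ids are hypothetical):

```clojure
(require '[clojure.java.jdbc :as jdbc])

(jdbc/with-db-connection [con db-spec]
  ;; doall forces the lazy seq while the connection is still open
  (doall
   (map #(jdbc/query con ["select * from items where id = ?" %]) ids)))
```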





[JDBC-77] Add ISQLValue protocol for conversion of Clojure values to SQL values Created: 02/Dec/13  Updated: 07/Dec/13  Resolved: 02/Dec/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

This should help people with Postgres or other exotic custom types in SQL.

Will need to add mapping of the protocol's function across sets of values.



 Comments   
Comment by Sean Corfield [ 02/Dec/13 1:06 AM ]

ISQLValue protocol added with default Object/nil implementations.
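
A minimal sketch of extending the protocol — here converting keywords to their names before they reach the driver (the Keyword extension is illustrative, not part of the library):

```clojure
(require '[clojure.java.jdbc :as jdbc])

(extend-protocol jdbc/ISQLValue
  clojure.lang.Keyword
  (sql-value [v]
    ;; bind keywords as plain strings
    (name v)))
```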

Comment by Wei Hsu [ 07/Dec/13 3:15 PM ]

Turns out that the :set-parameters integration point was enough for my needs at the moment. For now, I'm going to prioritize shipping my project over working on this protocol.

Comment by Sean Corfield [ 07/Dec/13 4:10 PM ]

Cool. Glad that integration point works for you! It's not as elegant but it provides more flexibility because of the arguments it provides (the whole set).





[JDBC-78] clojure.java.jdbc.sql missing in [org.clojure/java.jdbc "0.3.0-beta2"] Created: 02/Dec/13  Updated: 02/Dec/13  Resolved: 02/Dec/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Justin Woo Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: None


 Description   

Specifying [org.clojure/java.jdbc "0.3.0-beta2"] in project.clj, running any leiningen task throws this error:

#<FileNotFoundException java.io.FileNotFoundException: Could not locate clojure/java/jdbc/sql__init.class or clojure/java/jdbc/sql.clj on classpath: >

Reverting to 0.3.0-beta1 works normally.



 Comments   
Comment by Justin Woo [ 02/Dec/13 3:32 PM ]

Never mind, I found http://corfield.org/blog/post.cfm/clojure-java-jdbc-0-3-0-beta2

I guess the code examples are just outdated on github...

How do I go about closing this issue?

Comment by Sean Corfield [ 02/Dec/13 4:28 PM ]

Where did you find code examples that still refer to that deprecated (& now removed) namespace?

Comment by Sean Corfield [ 02/Dec/13 4:30 PM ]

Oops! Right there on the README page... darn, thought I'd caught all of those...

Comment by Sean Corfield [ 02/Dec/13 4:38 PM ]

README has been updated!





[JDBC-74] Allow :user as alias for :username for consistency Created: 15/Nov/13  Updated: 02/Dec/13  Resolved: 02/Dec/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

Only the :datasource format of connection uses :username - the URI format and any other that passes parameters thru use :user instead.

Allow :user to be used with :datasource for consistency. Open question: should we deprecate :username at this point?



 Comments   
Comment by Sean Corfield [ 02/Dec/13 1:23 AM ]

Added :user as an alias for :username





[JDBC-70] create-table if-not-exists Created: 12/Oct/13  Updated: 21/Nov/13  Resolved: 21/Nov/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Jacob Rask Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: None


 Description   

sqlite has the IF NOT EXISTS clause to CREATE TABLE, but there doesn't seem to be an easy way to use it through jdbc.

It is usually an error to attempt to create a new table in a database that already contains a table, index or view of the same name. However, if the "IF NOT EXISTS" clause is specified as part of the CREATE TABLE statement and a table or view of the same name already exists, the CREATE TABLE command simply has no effect (and no error message is returned). An error is still returned if the table cannot be created because of an existing index, even if the "IF NOT EXISTS" clause is specified.

https://sqlite.org/lang_createtable.html
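
In the meantime, the clause can be used by passing a raw DDL string to db-do-commands, sidestepping the DDL helpers entirely; a sketch (table and column names are illustrative):

```clojure
(require '[clojure.java.jdbc :as jdbc])

(jdbc/db-do-commands sqlite-db
  "CREATE TABLE IF NOT EXISTS fruit (name VARCHAR(32), cost INT)")
```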



 Comments   
Comment by Sean Corfield [ 03/Nov/13 9:40 PM ]

DROP TABLE supports this too (MySQL at least, no doubt others). DDL should provide a consistent way to add stuff like this.

Comment by Sean Corfield [ 21/Nov/13 11:12 PM ]

Based on recent feedback about the DSLs that crept into 0.3.0, I'm going to decline this and recommend efforts go toward adding DDL support to Justin Kramer's HoneySQL.

Comment by Sean Corfield [ 21/Nov/13 11:13 PM ]

Won't fix - DSLs are not growing here





[JDBC-73] Add ddl/create-primary-key and ddl/create-foreign-key Created: 05/Nov/13  Updated: 21/Nov/13  Resolved: 21/Nov/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Matt Oquist Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: None

Attachments: Text File java-jdbc-keys.patch    
Patch: Code and Test

 Description   

I've attached a patch to add two new DDL fns to create primary keys and foreign keys. What do you think?



 Comments   
Comment by Sean Corfield [ 21/Nov/13 11:08 PM ]

Given recent discussions about the direction of java.jdbc and the DSLs that crept into 0.3.0, I'm going to close this as "won't fix" and direct efforts toward adding DDL support to HoneySQL (and we talked a bit about this at Clojure/conj).

Comment by Sean Corfield [ 21/Nov/13 11:10 PM ]

Won't fix - DSLs are not growing here...





[JDBC-76] Upgrading to any version past 0.3.0-alpha4 causes Exception with result_set_read_column Created: 20/Nov/13  Updated: 21/Nov/13  Resolved: 20/Nov/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Thomas Steffes Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: None
Environment:

Mac OSX 10.9
Leiningen 2.3.3 on Java 1.7.0_15 Java HotSpot(TM) 64-Bit Server VM
Using this version of postgresql [postgresql "9.1-901-1.jdbc4"]



 Description   

If I upgrade to any version past 0.3.0-alpha4, I get this exception when attempting to run my ring server:

Exception in thread "main" java.lang.IllegalArgumentException: No single method: result_set_read_column of interface: clojure.java.jdbc.IResultSetReadColumn found for function: result-set-read-column of protocol: IResultSetReadColumn, compiling:(clojure/java/jdbc.clj:275:49)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6567)
at clojure.lang.Compiler.analyze(Compiler.java:6361)
at clojure.lang.Compiler.analyze(Compiler.java:6322)
at clojure.lang.Compiler$BodyExpr$Parser.parse(Compiler.java:5708)
at clojure.lang.Compiler$FnMethod.parse(Compiler.java:5139)
at clojure.lang.Compiler$FnExpr.parse(Compiler.java:3751)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6558)
at clojure.lang.Compiler.analyze(Compiler.java:6361)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6548)
at clojure.lang.Compiler.analyze(Compiler.java:6361)
at clojure.lang.Compiler.analyze(Compiler.java:6322)
at clojure.lang.Compiler$InvokeExpr.parse(Compiler.java:3624)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6562)
at clojure.lang.Compiler.analyze(Compiler.java:6361)
at clojure.lang.Compiler.analyze(Compiler.java:6322)
at clojure.lang.Compiler$BodyExpr$Parser.parse(Compiler.java:5708)
at clojure.lang.Compiler$FnMethod.parse(Compiler.java:5139)
at clojure.lang.Compiler$FnExpr.parse(Compiler.java:3751)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6558)
at clojure.lang.Compiler.analyze(Compiler.java:6361)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6548)
at clojure.lang.Compiler.analyze(Compiler.java:6361)
at clojure.lang.Compiler.access$100(Compiler.java:37)
at clojure.lang.Compiler$LetExpr$Parser.parse(Compiler.java:5973)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6560)
at clojure.lang.Compiler.analyze(Compiler.java:6361)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6548)
at clojure.lang.Compiler.analyze(Compiler.java:6361)
at clojure.lang.Compiler.analyze(Compiler.java:6322)
at clojure.lang.Compiler$BodyExpr$Parser.parse(Compiler.java:5708)
at clojure.lang.Compiler$LetExpr$Parser.parse(Compiler.java:6009)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6560)
at clojure.lang.Compiler.analyze(Compiler.java:6361)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6548)
at clojure.lang.Compiler.analyze(Compiler.java:6361)
at clojure.lang.Compiler.analyze(Compiler.java:6322)
at clojure.lang.Compiler$BodyExpr$Parser.parse(Compiler.java:5708)
at clojure.lang.Compiler$FnMethod.parse(Compiler.java:5139)
at clojure.lang.Compiler$FnExpr.parse(Compiler.java:3751)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6558)
at clojure.lang.Compiler.analyze(Compiler.java:6361)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6548)
at clojure.lang.Compiler.analyze(Compiler.java:6361)
at clojure.lang.Compiler.access$100(Compiler.java:37)
at clojure.lang.Compiler$DefExpr$Parser.parse(Compiler.java:529)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6560)
at clojure.lang.Compiler.analyze(Compiler.java:6361)
at clojure.lang.Compiler.analyze(Compiler.java:6322)
at clojure.lang.Compiler.eval(Compiler.java:6623)
at clojure.lang.Compiler.load(Compiler.java:7064)
at clojure.lang.RT.loadResourceScript(RT.java:370)
at clojure.lang.RT.loadResourceScript(RT.java:361)
at clojure.lang.RT.load(RT.java:440)
at clojure.lang.RT.load(RT.java:411)
at clojure.core$load$fn__5018.invoke(core.clj:5530)
at clojure.core$load.doInvoke(core.clj:5529)
at clojure.lang.RestFn.invoke(RestFn.java:408)
at clojure.core$load_one.invoke(core.clj:5336)
at clojure.core$load_lib$fn__4967.invoke(core.clj:5375)
at clojure.core$load_lib.doInvoke(core.clj:5374)
at clojure.lang.RestFn.applyTo(RestFn.java:142)
at clojure.core$apply.invoke(core.clj:619)
at clojure.core$load_libs.doInvoke(core.clj:5413)
at clojure.lang.RestFn.applyTo(RestFn.java:137)
at clojure.core$apply.invoke(core.clj:619)
at clojure.core$require.doInvoke(core.clj:5496)
at clojure.lang.RestFn.invoke(RestFn.java:436)
at eq_server.db$eval2811$loading_4910auto___2812.invoke(db.clj:1)
at eq_server.db$eval2811.invoke(db.clj:1)
at clojure.lang.Compiler.eval(Compiler.java:6619)
at clojure.lang.Compiler.eval(Compiler.java:6608)
at clojure.lang.Compiler.load(Compiler.java:7064)
at clojure.lang.RT.loadResourceScript(RT.java:370)
at clojure.lang.RT.loadResourceScript(RT.java:361)
at clojure.lang.RT.load(RT.java:440)
at clojure.lang.RT.load(RT.java:411)
at clojure.core$load$fn__5018.invoke(core.clj:5530)
at clojure.core$load.doInvoke(core.clj:5529)
at clojure.lang.RestFn.invoke(RestFn.java:408)
at clojure.core$load_one.invoke(core.clj:5336)
at clojure.core$load_lib$fn__4967.invoke(core.clj:5375)
at clojure.core$load_lib.doInvoke(core.clj:5374)
at clojure.lang.RestFn.applyTo(RestFn.java:142)
at clojure.core$apply.invoke(core.clj:619)
at clojure.core$load_libs.doInvoke(core.clj:5413)
at clojure.lang.RestFn.applyTo(RestFn.java:137)
at clojure.core$apply.invoke(core.clj:619)
at clojure.core$require.doInvoke(core.clj:5496)
at clojure.lang.RestFn.invoke(RestFn.java:457)
at eq_server.models.peek$eval2164$loading_4910auto___2165.invoke(peek.clj:1)
at eq_server.models.peek$eval2164.invoke(peek.clj:1)
at clojure.lang.Compiler.eval(Compiler.java:6619)
at clojure.lang.Compiler.eval(Compiler.java:6608)
at clojure.lang.Compiler.load(Compiler.java:7064)
at clojure.lang.RT.loadResourceScript(RT.java:370)
at clojure.lang.RT.loadResourceScript(RT.java:361)
at clojure.lang.RT.load(RT.java:440)
at clojure.lang.RT.load(RT.java:411)
at clojure.core$load$fn__5018.invoke(core.clj:5530)
at clojure.core$load.doInvoke(core.clj:5529)
at clojure.lang.RestFn.invoke(RestFn.java:408)
at clojure.core$load_one.invoke(core.clj:5336)
at clojure.core$load_lib$fn__4967.invoke(core.clj:5375)
at clojure.core$load_lib.doInvoke(core.clj:5374)
at clojure.lang.RestFn.applyTo(RestFn.java:142)
at clojure.core$apply.invoke(core.clj:619)
at clojure.core$load_libs.doInvoke(core.clj:5413)
at clojure.lang.RestFn.applyTo(RestFn.java:137)
at clojure.core$apply.invoke(core.clj:619)
at clojure.core$require.doInvoke(core.clj:5496)
at clojure.lang.RestFn.invoke(RestFn.java:551)
at eq_server.controllers.peeks$eval2133$loading_4910auto___2134.invoke(peeks.clj:1)
at eq_server.controllers.peeks$eval2133.invoke(peeks.clj:1)
at clojure.lang.Compiler.eval(Compiler.java:6619)
at clojure.lang.Compiler.eval(Compiler.java:6608)
at clojure.lang.Compiler.load(Compiler.java:7064)
at clojure.lang.RT.loadResourceScript(RT.java:370)
at clojure.lang.RT.loadResourceScript(RT.java:361)
at clojure.lang.RT.load(RT.java:440)
at clojure.lang.RT.load(RT.java:411)
at clojure.core$load$fn__5018.invoke(core.clj:5530)
at clojure.core$load.doInvoke(core.clj:5529)
at clojure.lang.RestFn.invoke(RestFn.java:408)
at clojure.core$load_one.invoke(core.clj:5336)
at clojure.core$load_lib$fn__4967.invoke(core.clj:5375)
at clojure.core$load_lib.doInvoke(core.clj:5374)
at clojure.lang.RestFn.applyTo(RestFn.java:142)
at clojure.core$apply.invoke(core.clj:619)
at clojure.core$load_libs.doInvoke(core.clj:5417)
at clojure.lang.RestFn.applyTo(RestFn.java:137)
at clojure.core$apply.invoke(core.clj:619)
at clojure.core$require.doInvoke(core.clj:5496)
at clojure.lang.RestFn.invoke(RestFn.java:482)
at eq_server.routes$eval2107$loading_4910auto___2108.invoke(routes.clj:1)
at eq_server.routes$eval2107.invoke(routes.clj:1)
at clojure.lang.Compiler.eval(Compiler.java:6619)
at clojure.lang.Compiler.eval(Compiler.java:6608)
at clojure.lang.Compiler.load(Compiler.java:7064)
at clojure.lang.RT.loadResourceScript(RT.java:370)
at clojure.lang.RT.loadResourceScript(RT.java:361)
at clojure.lang.RT.load(RT.java:440)
at clojure.lang.RT.load(RT.java:411)
at clojure.core$load$fn__5018.invoke(core.clj:5530)
at clojure.core$load.doInvoke(core.clj:5529)
at clojure.lang.RestFn.invoke(RestFn.java:408)
at clojure.core$load_one.invoke(core.clj:5336)
at clojure.core$load_lib$fn__4967.invoke(core.clj:5375)
at clojure.core$load_lib.doInvoke(core.clj:5374)
at clojure.lang.RestFn.applyTo(RestFn.java:142)
at clojure.core$apply.invoke(core.clj:619)
at clojure.core$load_libs.doInvoke(core.clj:5413)
at clojure.lang.RestFn.applyTo(RestFn.java:137)
at clojure.core$apply.invoke(core.clj:619)
at clojure.core$require.doInvoke(core.clj:5496)
at clojure.lang.RestFn.invoke(RestFn.java:457)
at user$eval5.invoke(form-init8113116856213476175.clj:1)
at clojure.lang.Compiler.eval(Compiler.java:6619)
at clojure.lang.Compiler.eval(Compiler.java:6608)
at clojure.lang.Compiler.load(Compiler.java:7064)
at clojure.lang.Compiler.loadFile(Compiler.java:7020)
at clojure.main$load_script.invoke(main.clj:294)
at clojure.main$init_opt.invoke(main.clj:299)
at clojure.main$initialize.invoke(main.clj:327)
at clojure.main$null_opt.invoke(main.clj:362)
at clojure.main$main.doInvoke(main.clj:440)
at clojure.lang.RestFn.invoke(RestFn.java:421)
at clojure.lang.Var.invoke(Var.java:419)
at clojure.lang.AFn.applyToHelper(AFn.java:163)
at clojure.lang.Var.applyTo(Var.java:532)
at clojure.main.main(main.java:37)
Caused by: java.lang.IllegalArgumentException: No single method: result_set_read_column of interface: clojure.java.jdbc.IResultSetReadColumn found for function: result-set-read-column of protocol: IResultSetReadColumn
at clojure.lang.Compiler$InvokeExpr.<init>(Compiler.java:3423)
at clojure.lang.Compiler$InvokeExpr.parse(Compiler.java:3630)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6562)
... 169 more



 Comments   
Comment by Sean Corfield [ 20/Nov/13 6:02 PM ]

Unable to reproduce - I have alpha5 in production so I know it works. I suspect you have old compiled code in your project: try lein clean and see if that fixes it.

Comment by Thomas Steffes [ 21/Nov/13 9:29 AM ]

Indeed that was the problem. Should have tried that before filing. Thanks!

Comment by Sean Corfield [ 21/Nov/13 11:15 AM ]

No problem. Glad it was that easy to fix





[JDBC-40] insert nulls using .setNull (e.g. for Teradata) Created: 12/Oct/12  Updated: 07/Nov/13  Resolved: 03/Nov/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Felix Andrews Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None
Environment:

Teradata 14


Attachments: Text File JDBC-40.patch    
Patch: Code

 Description   

Hi, I am using this nice package with a large (government) Teradata machine. I found that I can not insert nil values using insert-rows: the Teradata JDBC driver tells me to use .setNull or the form of .setObject including an SQL type code.

The following replacement definition of set-parameters was the only change needed to get this to work.

(defn- set-parameters
  "Add the parameters to the given statement."
  [^PreparedStatement stmt params]
  (let [pmd (.getParameterMetaData stmt)]
    (dorun
     (map-indexed
      (fn [ix value]
        ;; JDBC parameter indices are 1-based, hence (inc ix)
        (let [typ (.getParameterType pmd (inc ix))]
          (if (nil? value)
            ;; Teradata requires .setNull (with an explicit SQL type) for nils
            (.setNull stmt (inc ix) typ)
            (.setObject stmt (inc ix) value typ))))
      params))))

I am not sure whether the use of .getParameterMetaData is a generally appropriate because the docs http://docs.oracle.com/javase/7/docs/api/java/sql/ParameterMetaData.html say:

For some queries and driver implementations, the data that would be returned by a ParameterMetaData object may not be available until the PreparedStatement has been executed.

Anyway it does work for Teradata. Maybe it could be an option?

It just occurred to me that it is probably inefficient to retrieve the parameter metadata on every call to set-parameters. It could be done in insert-rows instead. On the other hand, I didn't actually notice any performance hit when inserting 40,000 rows.



 Comments   
Comment by Sean Corfield [ 07/Apr/13 1:07 AM ]

This was a tricky one. Both .getParameterMetaData and .getParameterType can throw exceptions. On MS SQL Server, if there is a syntax error in the SQL, the former throws an Exception. On MySQL, the latter throws an exception (every time as far as I can tell, even tho' .getParameterMetaData succeeds). That introduces quite a performance penalty for MySQL (about 10% in my tests).

However, if I treated MySQL as a special case and swallowed all exceptions and had a fallback to the original .setObject call, I could get everything to pass the test suite without a noticeable performance overhead (still measuring the latest MySQL performance as I write this).

Although it will probably slow everyone down a little, having to make these calls, it should be more robust and may make the .setObject calls faster in some situations?

Comment by Sean Corfield [ 07/Apr/13 1:24 AM ]

Fixed in 0.3.0-SNAPSHOT. May need to be revisited for performance tweaks later. Uncomfortable with MySQL being a special case here...

Comment by Sean Corfield [ 17/Apr/13 6:48 PM ]

This fix breaks PostgreSQL and already has MySQL as an exception, so I'm going to roll it back and we'll have to find another way to support Teradata, I'm afraid!

Comment by Paudi Moriarty [ 22/Apr/13 4:26 AM ]

PostgreSQL also complains about use of setObject with a null object and no parameter type when using protocolVersion 3.

Seems to work ok with protocolVersion 2 though.

Comment by Felix Andrews [ 25/Jun/13 9:15 PM ]

How about some kind of option to activate the .setNull behaviour?
e.g. binding [*jdbc-set-null* true]?

Comment by Gary Fredericks [ 02/Aug/13 10:00 AM ]

Attached a patch that adds an ISQLParameter protocol that handles the .setObject call but can be overridden for custom types.

The most awkward part of this is that I had to implement the protocol for Object and nil (in order to cover everything), but the Teradata case would require re-implementing for nil. As far as I can tell this would work fine, but overriding a protocol implementation feels sketchy.

I suppose an alternative would be to not provide any impls, and then check (satisfies? ISQLParameter x) and do the default behavior if not. Not sure if that would perform worse though.

Comment by Paudi Moriarty [ 02/Aug/13 10:09 AM ]

Beware that satisfies? is slow in the negative case. I'd avoid it here if possible.

Comment by Gary Fredericks [ 08/Oct/13 9:49 PM ]

Revisiting this since it keeps coming up at work.

I feel like the main downside to my patch's approach is that it precludes interacting with two different kinds of databases in the same process – if I want to connect to postgres in one function and have it write nulls one way, while writing nulls a different way to teradata, this isn't really possible.

The only other idea I have is passing a "setter" function as an option, which would do basically the same thing the protocol does in my patch.

Any ideas?

Comment by Sean Corfield [ 03/Nov/13 11:07 PM ]

Since this really is a per-db issue - and restricted to a handful of DBs (or fewer) - my sense is that we should do something with the db-spec itself to offer a way to override how set-parameters works. Having looked at the code, here's my proposal:

Wherever the code calls set-parameters in the context of a db-spec, it will check for a :set-parameters key in the db-spec and use that if present, else default to the normal set-parameters call.

This would allow you to add :set-parameters to your db-spec with a function that mimicked the built-in set-parameters except handled null values in a different way.

I'm going to make this change for 0.3.0-beta1 (coming real soon) and see how it works for you...
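
A sketch of what that override might look like for the Teradata case, reusing the setNull logic from the description above (the [stmt params] shape of the :set-parameters function is assumed from the built-in set-parameters, and base-db-spec is hypothetical):

```clojure
;; hypothetical Teradata db-spec with a custom parameter-setting function
(def teradata-db
  (assoc base-db-spec
         :set-parameters
         (fn [^java.sql.PreparedStatement stmt params]
           (let [pmd (.getParameterMetaData stmt)]
             (dorun
              (map-indexed
               (fn [ix value]
                 (let [typ (.getParameterType pmd (inc ix))]
                   (if (nil? value)
                     (.setNull stmt (inc ix) typ)   ; nils need a SQL type
                     (.setObject stmt (inc ix) value typ))))
               params))))))
```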

Comment by Sean Corfield [ 03/Nov/13 11:17 PM ]

Asserting "fixed" by virtue of allowing set-parameters to be overridden per-db-spec.

Comment by Felix Andrews [ 07/Nov/13 10:58 PM ]

Good solution. Many thanks!





[JDBC-63] Rename and expose db-with-query-results* Created: 26/Jun/13  Updated: 04/Nov/13  Resolved: 03/Nov/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Justin Kramer Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None

Attachments: Text File resultset.patch    
Patch: Code

 Description   

I wanted to leverage java.jdbc but provide my own result-set-seq function. The only way to do that was to use alter-var-root, reach into private functions, and other such hacks.

The attached patch renames db-with-query-results* to db-with-result-set*, makes it public, and makes its func expect a ResultSet.

This separates result set processing from query execution. I.e., I can ask the library to execute some SQL against a DB spec, return a ResultSet, and then I'm free to process it as I see fit.

All tests pass.



 Comments   
Comment by Sean Corfield [ 26/Jun/13 5:27 PM ]

I can't say I like that approach. I really don't want to expose the ...* implementation functions. Can you explain what you needed to do differently in your result-set-seq function? Making that extensible by some other approach seems like a better way to tackle this...

Comment by Justin Kramer [ 26/Jun/13 5:48 PM ]

Pasting from our chat on IRC:

<jkkramer> I was initially thinking that (I have a use case in mind I'll explain) but thought this approach allowed for maximum flexibility for callers
<jkkramer> my particular use case is I wanted custom column labels
<jkkramer> which could be accomplished with a custom-label keyword arg or similar to result-set-seq, which is a function that takes rsmeta and idx
<jkkramer> however – i also wanted to be able to return additional per-column metadata such as the table name, which could be used later on to e.g. create nested hash map results. it starts getting complicated to rely on result-set-seq. being able to write my own ResultSet-processing function would be nice
<jkkramer> but i don't want to have to bother with connections, prepared statements, etc. just ResultSet processing

Comment by Sean Corfield [ 15/Sep/13 3:04 PM ]

FWIW, I just ran into a situation where I would like to be able to return a raw ResultSet from query so I'll have to give this some thought. Won't make 0.3.0 tho'...

Comment by Justin Kramer [ 16/Sep/13 6:55 AM ]

Just to reinforce the point: I don't see it as exposing implementation details so much as exposing the host – i.e., JDBC building blocks. Being able to process a ResultSet without having to manage the connection details would be extremely useful to me. I can make do until 0.4.0+ though.

Comment by Sean Corfield [ 03/Nov/13 10:41 PM ]

I've decided to incorporate a variant of this (since I was in messing with the guts of the query function anyway).

db-with-query-results* is now a public function called db-query-with-resultset but otherwise does exactly what you wanted, based on your patch.

Comment by Justin Kramer [ 04/Nov/13 2:15 PM ]

Excellent! Thanks, Sean. I'll be taking beta1 for a spin shortly.





[JDBC-69] Allow use on Android Created: 20/Sep/13  Updated: 03/Nov/13  Resolved: 03/Nov/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Zach Oakes Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None

Attachments: Text File jdbc.clj.patch    
Patch: Code

 Description   

This patch uses the when-available macro from math.numeric-tower to check for the existence of javax.naming.InitialContext before using it. Since the javax.naming.* namespace is not available on Android, its current use prevents this library from running on the platform.






[JDBC-72] Improve laziness with as-arrays? Created: 03/Nov/13  Updated: 03/Nov/13  Resolved: 03/Nov/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 1
Labels: None


 Description   

Currently, query uses vec with as-arrays? so the result set is fully realized before result-set-fn is called. This isn't ideal.

A useful improvement would be to default result-set-fn to vec if as-arrays? else to doall (as currently) and remove the vec call in the conditional argument to result-set-fn.

This would not change the default behavior for as-arrays? true/false but would allow users to override result-set-fn to leverage lazy result set fetching even when using as-arrays? true.
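With that change, a caller could combine :as-arrays? true with a lazy-friendly result-set-fn, along these lines (a sketch against the 0.3.0-era keyword-argument API; db-spec and the table are assumed):

```clojure
(require '[clojure.java.jdbc :as jdbc])

;; Count matching rows without realizing a vector of row arrays.
;; rest skips the column-header row that :as-arrays? true prepends.
(jdbc/query db-spec ["SELECT id, name FROM users"]
            :as-arrays? true
            :result-set-fn (fn [rows] (count (rest rows))))
```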



 Comments   
Comment by Sean Corfield [ 03/Nov/13 9:39 PM ]

Will be in alpha6/beta1





[JDBC-71] CLJ-1252 (Clojure 1.6) invalidates keywords that start with digits Created: 28/Oct/13  Updated: 28/Oct/13  Resolved: 28/Oct/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

A ResultSet can contain pretty much any column names as keys. As of CLJ-1252 being applied, keywords are more restricted now. It's not clear what should be done with keys in a ResultSet that cannot be converted to keywords... left as strings perhaps?



 Comments   
Comment by Sean Corfield [ 28/Oct/13 4:49 PM ]

Tests updated to use (keyword "1") instead of :1.





[JDBC-37] Provide support for alternate transaction strategies Created: 31/Jul/12  Updated: 15/Sep/13

Status: Open
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Jim Crossley Assignee: Sean Corfield
Resolution: Unresolved Votes: 0
Labels: None

Attachments: File jdbc-37.diff    

 Description   

The current design of java.jdbc prevents its use as a participant in a distributed XA transaction, where transactional control is delegated to a TransactionManager. It only works with local transactions and absorbs all nested transactions into the outermost one. It'd be nice to have a clean way to override this default behavior.



 Comments   
Comment by Jim Crossley [ 31/Jul/12 5:49 PM ]

I'll try to work up a straw-man solution and submit a pull-request.

Comment by Sean Corfield [ 31/Jul/12 5:59 PM ]

Thanx Jim. I agree the current setup isn't ideal in that area. As for "pull-request", I assume you mean a patch attached to this JIRA ticket (since Clojure and contrib projects cannot accept pull requests). Please also make sure you get a Contributor's Agreement on file per http://clojure.org/contributing

Comment by Jim Crossley [ 31/Jul/12 6:25 PM ]

Sure thing, Sean

Comment by Jim Crossley [ 02/Aug/12 6:31 PM ]

Sean, here's a first whack introducing a new dynamic var for transaction strategy.

I know the desire is for a new API with explicit parameter passing, and when that vision congeals, I'm happy to help migrate, but I'd like to always have the option of the dynamic binding as well.

My thinking is that if a tx strategy function is passed as a parameter, it'll override whatever may be set in the dynamic var, but how it gets passed is still unclear to me. I considered adding an optional key to the db-spec, but wanted to run that by you first.

The Agreement is in the mail. I appreciate your feedback.

Comment by Sean Corfield [ 07/Apr/13 3:46 AM ]

There have been a lot of code changes lately and this patch no longer applies cleanly. Can you submit a new patch against the latest master? Thanx!

Comment by Jim Crossley [ 30/Apr/13 5:43 PM ]

Sean, I'm not sure I'm totally smitten with the new "transaction?" boolean parameter. At first glance, this seems an awkward way to define a transaction consisting of multiple statements. Can you provide an example usage with the new API of say, inserting, updating and deleting data within a single transaction? I'm hoping an example will clear up my confusion and I can propose a way of parameterizing a particular strategy for executing any transaction.

Comment by Sean Corfield [ 30/Apr/13 5:47 PM ]

See "Using Transactions" here https://github.com/clojure/java.jdbc/blob/master/doc/clojure/java/jdbc/UsingSQL.md

Comment by Jim Crossley [ 30/Apr/13 6:30 PM ]

Yes, I saw that, and it seemed to confirm that my original patch should work with minor tweaking. And then I was surprised to see the "transactional?" option in the source. I was curious how you expect it to be used.

Comment by Sean Corfield [ 30/Apr/13 6:58 PM ]

Folks have asked for the ability to run various functions without an implicit transaction inside them - in fact, for some DBs, certain commands cannot be run inside a transaction, which was a problem with the old API, where you couldn't turn that off. It allows users to have more explicit control over transactions and it's also a convenient "implementation artifact" for nesting calls.

So, bottom line: I expect very few users to actually use it explicitly, unless they specifically need to turn off the implicit transaction wrapping.

And for most of the API that users will interact with, they don't even need to worry about it.

Does that help?

Comment by Sean Corfield [ 30/Apr/13 7:04 PM ]

Addressing your question about your patch: Clojure/core specifically wanted java.jdbc to move away from dynamically bound variables, which the new API / implementation achieves (given that all the old API that depends on dynamic-vars is deprecated now and will be completely removed before 1.0.0).

If all you need is the ability to specify how the transaction function does its job, via a HOF, then I'll have a look at what that would take in the context of the new 'world'...

Comment by Jim Crossley [ 30/Apr/13 7:33 PM ]

Regarding dynamically bound variables, I think it's very common and accepted – even canonical? – to use them (or some ThreadLocal-like variant) to implement transactions. I would hate to make the api awkward just to avoid them.

But to answer the core question, yes, I think it's important to provide an alternative to the assumptions encoded into db-transaction*, e.g. "Any nested transactions are absorbed into the outermost transaction." I might prefer a strategy in which a nested transaction suspends the current one and creates another, assuming the driver supports it.

But my primary reason for this, as you know, is to somehow inject a "null strategy" to support distributed transactions, delegating the commit/rollback choice to an external "transaction manager".

One question: what do you mean by "implementation artifact" for nesting calls?

Comment by Jim Crossley [ 01/May/13 9:46 AM ]

Sean, how do you feel about turning the :transactional? option into a function instead of a boolean? That function would represent the :tx-strategy used, which could default to the value of a dynamically bound *tx-strategy*, whose value would be db-transaction* by default. And folks could set it to nil to turn off transactions, i.e. :tx-strategy nil (or perhaps :tx-strategy :none) would equate to :transactional? false. I think that may satisfy core's recommendation that dynamic variables not be the only way to alter behavior. Make sense at all?

Comment by Sean Corfield [ 01/May/13 10:57 AM ]

I was looking at the code again last night and came to much the same conclusion! I'll take a run at that this weekend (but I'm not adding a dynamic variable - Clojure/core were very clear about their reasons for not wanting those in code except in extremely rare situations in code that is guaranteed to be single-threaded).

Comment by Jim Crossley [ 01/May/13 11:22 AM ]

You're killing me!

Without the dynamic var, I can't see any way to transparently allow the db code to participate in a distributed transaction. Can we at least agree that transactional code is guaranteed to be effectively single-threaded? And by this I mean that a transaction must be associated with a single connection, so any thread using that connection must have exclusive access. Do you really want to force folks using distributed transactions to pass the tx strategy in with every call? I don't think adding the dynamic var and the option to override it violates the spirit of this guideline: "If you present an interface that implicitly passes a parameter via dynamic binding (e.g. db in sql), also provide an identical interface but with the parameter passed explicitly."

What was the specific feedback you received that contradicts that?

Comment by Jim Crossley [ 03/May/13 6:44 AM ]

Sean, I came up with a different solution for using java.jdbc with an XA connection that obviates this issue. So even though I think it's useful to provide both a dynamic var and a function option as a means to override the logic of db-transaction*, I no longer have a need for it.

Keep up the good work on java.jdbc!

Comment by Sean Corfield [ 03/May/13 12:40 PM ]

I like problems that go away of their own accord but I still like the idea of making the transaction strategy a function so I'll look at that anyway as a possible (breaking) change for alpha2.

Comment by Jim Crossley [ 03/May/13 12:50 PM ]

Something else you might consider: define a protocol function that encapsulates your commit/rollback/setAutoCommit logic inside db-transaction* and extend it to java.sql.Connection. That way, folks could extend their more specific types, e.g. XAConnection, to your protocol (and avoid making those calls that aren't allowed by XA).

Comment by Sean Corfield [ 15/Sep/13 9:41 PM ]

Having spent some time looking at the transaction-as-function option, I don't think that's a great idea - partly because I'm not sure what alternative functions would look like. Jim's suggestion of a protocol for the internal transaction logic seems like a good one but at this point I'm not familiar enough with alternative strategies to know exactly how the protocol should look (and which parts of the internal db-transaction* logic should be implemented that way) so I'm going to punt on this for 0.3.0 but leave it open for the future.





[JDBC-1] Provide option to return SQL generated / execution stats Created: 08/May/11  Updated: 15/Sep/13

Status: Open
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Shantanu: Provide a mechanism to show the SQL being executed (configurable, so that it can be turned off)

Sean: Good idea. Even better, a way to access statistics about the prepared statement after execution - timing etc?

Shantanu: Yes, that would be an add-on value to show how are the queries performing.



 Comments   
Comment by Sean Corfield [ 15/Sep/13 4:21 PM ]

Post 0.3.0





[JDBC-48] Support stored procedures with CallableStatement Created: 15/Mar/13  Updated: 15/Sep/13

Status: Open
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Jeremy Heiler Assignee: Sean Corfield
Resolution: Unresolved Votes: 0
Labels: None


 Description   

JDBC's CallableStatement provides support for calling stored procedures. More specifically, it allows you to register OUT parameters, which become the statement's (possibly many) ResultSet objects. A CallableStatement is a PreparedStatement, so I am hoping there won't be too much involved with regard to executing them. The main difference is being able to register and consume OUT parameters.

I'll be hacking on this, so patches are forthcoming. Any input is appreciated.
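At the plain JDBC interop level, the OUT-parameter handling the library would need to wrap looks roughly like this (a sketch; the procedure name and types are hypothetical, and conn is assumed to be an open java.sql.Connection):

```clojure
(with-open [^java.sql.CallableStatement cs
            (.prepareCall conn "{call get_user_count(?, ?)}")]
  (.setString cs 1 "active")                          ; IN parameter
  (.registerOutParameter cs 2 java.sql.Types/INTEGER) ; OUT parameter
  (.execute cs)
  (.getInt cs 2))                                     ; read the OUT value
```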



 Comments   
Comment by Sean Corfield [ 15/Mar/13 10:51 PM ]

I've never used stored procs (I don't like the complexity that I've seen them add to version control, change management and deployment) so I'm afraid I can't offer any input - but I really appreciate you taking this on! Thank you!

Comment by Sean Corfield [ 15/Sep/13 4:20 PM ]

Post 0.3.0. See also JDBC-64.





[JDBC-56] Problem with find-connection when using agents with c3p0 connection pool Created: 16/May/13  Updated: 15/Sep/13  Resolved: 15/Sep/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Minor
Reporter: Ed O'Loughlin Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: None
Environment:

OS X 10.8.3
Java 1.6.0_45
Clojure 1.5
c3p0 0.9.1.2
java.jdbc 0.2.1 (the problem also seems to be there in HEAD at the time of logging this)



 Description   

I use an agent to offload email processing and update a db on completion. If I use a c3p0 connection pool, I get an exception and a complaint that the connection isn't open (see stacktrace). The code works if I wrap the agent fn in a clojure.java.jdbc/with-connection and get a connection from the pool in the agent thread.

In jdbc.clj, the find-connection function (db-find-connection in HEAD) checks for an existing connection and tries to use it without checking whether it's open. This causes an exception with a c3p0 connection that was acquired in another thread.

Here's the stacktrace I'm seeing (note the "You can't operate on a closed Connection!!!"):

2013-05-16 10:32:22,208 [clojure-agent-send-off-pool-7] DEBUG com.mchange.v2.sql.SqlUtils - Converting Throwable to SQLException...
java.lang.NullPointerException
	at com.mchange.v2.c3p0.impl.NewProxyConnection.prepareStatement(NewProxyConnection.java:186)
	at clojure.java.jdbc$prepare_statement.doInvoke(jdbc.clj:450)
	at clojure.lang.RestFn.invoke(RestFn.java:425)
	at clojure.lang.AFn.applyToHelper(AFn.java:163)
	at clojure.lang.RestFn.applyTo(RestFn.java:132)
	at clojure.core$apply.invoke(core.clj:621)
	at clojure.java.jdbc$with_query_results_STAR_.invoke(jdbc.clj:646)
	at clj_record.core$find_by_sql$func__3362__auto____3378.invoke(core.clj:72)
	at clj_record.core$find_by_sql.invoke(core.clj:71)
	at clj_record.core$find_records.invoke(core.clj:85)
	at clj_record.core$find_record.invoke(core.clj:91)
	at neataudio.domain.interview_participant_relationship$find_record.invoke(interview_participant_relationship.clj:18)
	at neataudio.domain.interview_participant_relationship$update_email_status.invoke(interview_participant_relationship.clj:37)
	at neataudio.controllers.participant_controller$notify$fn__5113$fn__5114$func__3362__auto____5115$fn__5116.invoke(participant_controller.clj:104)
	at clojure.java.jdbc$transaction_STAR_$fn__3162.invoke(jdbc.clj:372)
	at clojure.java.jdbc$transaction_STAR_.invoke(jdbc.clj:371)
	at neataudio.controllers.participant_controller$notify$fn__5113$fn__5114$func__3362__auto____5115.invoke(participant_controller.clj:101)
	at neataudio.controllers.participant_controller$notify$fn__5113$fn__5114.invoke(participant_controller.clj:101)
	at neataudio.controllers.participant_controller$notify$fn__5113.invoke(participant_controller.clj:101)
	at clojure.lang.AFn.applyToHelper(AFn.java:185)
	at clojure.lang.AFn.applyTo(AFn.java:151)
	at clojure.core$apply.invoke(core.clj:623)
	at clojure.core$binding_conveyor_fn$fn__4115.doInvoke(core.clj:1848)
	at clojure.lang.RestFn.applyTo(RestFn.java:146)
	at clojure.lang.Agent$Action.doRun(Agent.java:114)
	at clojure.lang.Agent$Action.run(Agent.java:163)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
	at java.lang.Thread.run(Thread.java:680)
java.sql.SQLException: You can't operate on a closed Connection!!!
	at com.mchange.v2.sql.SqlUtils.toSQLException(SqlUtils.java:106)
	at com.mchange.v2.sql.SqlUtils.toSQLException(SqlUtils.java:65)
	at com.mchange.v2.c3p0.impl.NewProxyConnection.prepareStatement(NewProxyConnection.java:222)
	at clojure.java.jdbc$prepare_statement.doInvoke(jdbc.clj:450)
	at clojure.lang.RestFn.invoke(RestFn.java:425)
	at clojure.lang.AFn.applyToHelper(AFn.java:163)
	at clojure.lang.RestFn.applyTo(RestFn.java:132)
	at clojure.core$apply.invoke(core.clj:621)
	at clojure.java.jdbc$with_query_results_STAR_.invoke(jdbc.clj:646)
	at clj_record.core$find_by_sql$func__3362__auto____3378.invoke(core.clj:72)
	at clj_record.core$find_by_sql.invoke(core.clj:71)
	at clj_record.core$find_records.invoke(core.clj:85)
	at clj_record.core$find_record.invoke(core.clj:91)
	at neataudio.domain.interview_participant_relationship$find_record.invoke(interview_participant_relationship.clj:18)
	at neataudio.domain.interview_participant_relationship$update_email_status.invoke(interview_participant_relationship.clj:37)
	at neataudio.controllers.participant_controller$notify$fn__5113$fn__5114$func__3362__auto____5115$fn__5116.invoke(participant_controller.clj:104)
	at clojure.java.jdbc$transaction_STAR_$fn__3162.invoke(jdbc.clj:372)
	at clojure.java.jdbc$transaction_STAR_.invoke(jdbc.clj:371)
	at neataudio.controllers.participant_controller$notify$fn__5113$fn__5114$func__3362__auto____5115.invoke(participant_controller.clj:101)
	at neataudio.controllers.participant_controller$notify$fn__5113$fn__5114.invoke(participant_controller.clj:101)
	at neataudio.controllers.participant_controller$notify$fn__5113.invoke(participant_controller.clj:101)
	at clojure.lang.AFn.applyToHelper(AFn.java:185)
	at clojure.lang.AFn.applyTo(AFn.java:151)
	at clojure.core$apply.invoke(core.clj:623)
	at clojure.core$binding_conveyor_fn$fn__4115.doInvoke(core.clj:1848)
	at clojure.lang.RestFn.applyTo(RestFn.java:146)
	at clojure.lang.Agent$Action.doRun(Agent.java:114)
	at clojure.lang.Agent$Action.run(Agent.java:163)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
	at java.lang.Thread.run(Thread.java:680)
Caused by: java.lang.NullPointerException
	at com.mchange.v2.c3p0.impl.NewProxyConnection.prepareStatement(NewProxyConnection.java:186)
	... 28 more


 Comments   
Comment by Ed O'Loughlin [ 16/May/13 4:50 AM ]

Apologies: this is a crappy bug report. I'm under a bit of pressure at the moment so I can't make a test case.

Comment by Sean Corfield [ 21/May/13 12:36 PM ]

If you use the new API, you pass the db-spec into the functions and it will take care of getting the connection, using it, and closing it - and that will all happen on the same (agent) thread. I would not expect you to be able to get a connection in one thread and use it from another thread.

When you get a chance, please post an example of both the working and non-working code, but my initial thinking is this isn't a bug, just an inherent limitation of working with connections across threads....?

Comment by Sean Corfield [ 15/Jun/13 9:04 PM ]

Ed, are you going to get a chance to post an example of working / non-working code per my request a month ago?

I'll leave this open until the end of June but without further information, I'm going to assert it works if used as designed and close this as not a bug.

Comment by Sean Corfield [ 15/Sep/13 4:17 PM ]

In the absence of better information, I'm closing this "works as designed" and caution that people need to be more careful when using connections across multiple threads.





[JDBC-68] Optional 'identifier' kwarg should be responsible to keywordize row keys Created: 14/Aug/13  Updated: 15/Sep/13  Resolved: 15/Sep/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Shantanu Kumar Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: enhancement
Environment:

Clojure (JVM)


Attachments: File identifier-should-keywordize.diff    
Patch: Code

 Description   

The optional 'identifier' kwarg only transforms the string column name into another string; the row keys are always keywordized. In some result-parsing cases, where a different transformation is required, the default keywordization is unnecessary.

A possible solution is to expand the responsibility of 'identifier' to keywordize (default behavior) when required. This solution is attached as a patch to this issue, and passes all tests.



 Comments   
Comment by Sean Corfield [ 14/Aug/13 3:34 PM ]

The problem with this approach is that anyone using the `identifiers` approach - overriding the default behavior - will now have broken code. I'll give it more thought. I think a :keyword? true|false argument, passed through everything, is the only way to maintain backward compatibility, but that's a lot of churn, so breaking backward compatibility may be the less painful approach (esp. since 0.3.0 is a whole new API anyway).

Comment by Sean Corfield [ 15/Sep/13 4:15 PM ]

java.jdbc has always converted column identifiers to keywords and having keys in maps be keywords is idiomatic in Clojure. If some folks want non-keywords, they can post-process the results.

Adding another option to turn keywords on and off will make the API more complex and breaking compatibility by suddenly not turning the column names into keywords, and forcing a lot of existing code to change, is not acceptable at this point.





[JDBC-60] insert! is not documented to always return a sequence of generated keys Created: 26/May/13  Updated: 15/Sep/13  Resolved: 15/Sep/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Task Priority: Trivial
Reporter: Christian Sperandio Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: None
Environment:

OS X Clojure 1.5.1 clojure.jdbc 0.3-alpha4



 Description   

The insert! function always returns a sequence of keys even if only one record is inserted.



 Comments   
Comment by Sean Corfield [ 26/May/13 9:58 AM ]

I think this is more consistent than returning a sequence sometimes and a single value other times. If you want the first key, you know you're always safe to call `first` on the result. I'll probably just update the docstring to make it clearer what happens in different circumstances.

Comment by Christian Sperandio [ 26/May/13 10:21 AM ]

I agree with you about the consistency of always returning a sequence. I was afraid that a future change (closer to the insert-record function) would break the current behavior.

Comment by Sean Corfield [ 15/Sep/13 4:12 PM ]

This is a documentation issue and the documentation is now editable by the community in its new home on http://clojure-doc.org





[JDBC-66] Permit non-keyword labels Created: 17/Jul/13  Updated: 15/Sep/13  Resolved: 15/Sep/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Kyle Cordes Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: None


 Description   

This line of code in jdbc.clj:

(map (comp keyword identifiers))

... made sense before the newish as-arrays? feature. The results were always maps, and with maps it is idiomatic to use keywords as keys.

But when returning the results as nested lists, the first entry contains the field identifiers; these might very reasonably be something other than keywords.

I suggest that when the user passes in their own function to translate fields to keys, they be able to make the keys whatever they like, perhaps plain strings or perhaps something else.

As a workaround, one can just let this code keywordize the identifiers and then un-keywordize them afterward.
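That workaround only has to touch the header row, e.g. (a sketch; db-spec and the table are assumed):

```clojure
(require '[clojure.java.jdbc :as jdbc])

;; Let query keywordize as usual, then turn the header row back into
;; strings; the data rows are untouched.
(let [[cols & rows] (jdbc/query db-spec ["SELECT * FROM t"]
                                :as-arrays? true)]
  (cons (mapv name cols) rows))
```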



 Comments   
Comment by Sean Corfield [ 15/Sep/13 4:10 PM ]

Given the workaround is easy - and the behavior to-date is well-known and well-documented - I'm not going to complicate the machinery to support non-keywords in array results here.





[JDBC-61] Add left and right join to DSL (Patch attached) Created: 16/Jun/13  Updated: 15/Sep/13  Resolved: 15/Sep/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sebastian Hennebrueder Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: None

Attachments: Text File 0001-add-left-and-right-join-to-DSL.patch    
Patch: Code and Test

 Comments   
Comment by Sean Corfield [ 15/Sep/13 4:07 PM ]

Rather than expand the deliberately minimal DSL for SQL generation, I'd rather point people to HoneySQL.

The only area I might consider expanding the DSL is for the where clause to support more than just key = value but even there I'd rather direct folks to HoneySQL.





[JDBC-53] create-table & drop-table doesn't honor naming strategy Created: 24/Apr/13  Updated: 15/Sep/13  Resolved: 15/Sep/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Roman Scherer Assignee: Sean Corfield
Resolution: Completed Votes: 1
Labels: None


 Description   

create-table & drop-table doesn't take the current naming strategy into account.



 Comments   
Comment by Sean Corfield [ 24/Apr/13 11:08 AM ]

Just to note: the naming strategy stuff, based on dynamically bound globals, is all deprecated in 0.3.0. Alternatives to create-table and drop-table, which respect the new entities function approach, will be provided in an upcoming alpha build of 0.3.0.

Comment by David James [ 11/May/13 8:51 PM ]

Any update on this? I noticed the deprecations while reading the code. Could you say a few words or add a few docs on the 0.3 way of doing naming strategies?

Comment by Sean Corfield [ 11/May/13 11:20 PM ]

The replacement is entities and identifiers (as keyword arguments in java.jdbc functions and as macros in java.jdbc.sql). You can see some examples here:

http://clojure.github.io/java.jdbc/doc/clojure/java/jdbc/NameMapping.html

Replacements for create-table / drop-table will be added before beta1. I'm not sure right now how many more alphas we'll have. There's a lot of churn in the code right now and I want it to be feature complete and fairly settled before the first beta.

Comment by Matt Oquist [ 12/May/13 8:56 PM ]

Thanks for the update; looking forward to it!

Comment by Sean Corfield [ 15/Sep/13 3:39 PM ]

Will be in alpha5





[JDBC-39] Boolean - BIT(1) - columns are returned as byte[] via some DB connections Created: 10/Sep/12  Updated: 06/Aug/13  Resolved: 12/Sep/12

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: None
Environment:

Specifically using the MySQL connector 5.0.x or 5.1.x against a Tungsten Connector proxy



 Description   

The root cause appears to be that rs.getObject( i ) seems to take the field flags into effect over the basic java.sql.Types values.

In this case a BIT(1) column comes back from MySQL as type 16 (BOOLEAN), flags UNSIGNED. From Tungsten it comes back as type 16, flags BINARY BLOB UNSIGNED.

The resultset-seq construction code is pretty cavalier about using just getObject() instead of more specific types so this might be the start of a smarter approach to that.



 Comments   
Comment by Sean Corfield [ 10/Sep/12 5:51 PM ]

Further update: this looks to be a configuration or version issue with the Tungsten Connector setup as we've been able to repro the problem in one environment but it works as expected in a newer environment.

May still go ahead with this change anyway since I believe it's a good change for robustness, but it looks lower priority right now.

Comment by Sean Corfield [ 10/Sep/12 6:51 PM ]

Pushed a fix for this but would like more hands on this in SNAPSHOT form before declaring it resolved.

Comment by Sean Corfield [ 10/Sep/12 7:46 PM ]

Tungsten's Connector defeats this fix. Even tho' the fields have an underlying mysqlType value of 16 (BOOLEAN), (.getColumnType) returns -3 (byte[] presumably) so Boolean is not forced as I had hoped.

Comment by Sean Corfield [ 12/Sep/12 7:51 PM ]

The underlying behavior is a quirk of the Tungsten Connector configuration which we believe we've successfully corrected. Given the result of (.getColumnType) is -3, I'm not sure anything could be done in code to address this, so I'm going to resolve this as invalid.

Comment by Sean Corfield [ 12/Sep/12 7:52 PM ]

Not a code issue.

Comment by Sean Corfield [ 06/Aug/13 6:16 PM ]

In case anyone else runs into this with Tungsten: BIT fields are handled in a version-specific way and the logic assumes three segments to the version number. Tungsten's default configuration has only two segments in the version number so the BIT handling logic barfs. The workaround is to change the Tungsten configured version to match the actual MySQL version you are proxying (e.g., 5.5.30-percona instead of 5.1-tungsten).





[JDBC-67] Should the default result-set-fn be vec instead of doall? Created: 01/Aug/13  Updated: 01/Aug/13  Resolved: 01/Aug/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Gary Fredericks Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: None


 Description   

Rationale: obviously we want eagerness already, so the only difference here is what data structure we use. A vector has the advantage of being chunked when processed as a seq, which is probably the normal usage?

Happy to patch if there are no objections.
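For context, a quick REPL check (not part of the ticket) shows the difference being discussed: a vector's seq is chunked, while a lazy seq built from an unchunked source - such as the cons-based seq that resultset-seq produces - stays unchunked even after doall:

```clojure
;; A vector's seq is chunked, so seq-processing functions such as
;; map can consume it 32 elements at a time:
(chunked-seq? (seq (vec (range 100))))
;; => true

;; A lazy seq over an unchunked source (here a plain list) is not
;; chunked, and doall only forces realization - it doesn't chunk:
(chunked-seq? (seq (doall (map inc '(1 2 3)))))
;; => false
```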



 Comments   
Comment by Sean Corfield [ 01/Aug/13 11:21 AM ]

Most code out there is going to treat the result as a seq so I'd be concerned about potential overhead in changing from essentially `(seq (doall (map ...)))` to `(seq (vec (map ...)))`. Can you persuade me that's not a real issue?

Comment by Gary Fredericks [ 01/Aug/13 11:29 AM ]

It occurred to me that maybe a more efficient version of this would be the ability to (maybe not by default) construct a chunked-seq directly (in the lower-level manner) in the result-set-seq function. Would this address your objections?

Comment by Gary Fredericks [ 01/Aug/13 11:30 AM ]

I think the only reason to avoid that by default is the danger (is it really a problem?) of reading more records than the user needs. But it might be that the benefits of chunked seqs outweigh that risk, and just providing an option to unchunk would be sufficient.

Comment by Sean Corfield [ 01/Aug/13 12:03 PM ]

That sounds like over-complication to me. Is it solving a real problem that exists today with the `doall` approach?

Comment by Sean Corfield [ 01/Aug/13 12:06 PM ]

And it also occurred to me that if someone actually wants a vector result, they can just specify `:result-set-fn vec` directly. So it's easy to get the behavior you want without changing the behavior for all that existing code out there (I know, not much code relies on the new API yet, but I have it in production at World Singles so I don't want it to silently change under me at this point - we'd have to do a lot of testing to ensure it didn't adversely affect performance and/or stability).

Comment by Sean Corfield [ 01/Aug/13 12:06 PM ]

If you want a vec, specify it for :result-set-fn. At this point I don't want to change the default when there is existing production code depending on the old behavior.
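Sean's suggestion, sketched against a hypothetical db-spec in the 0.3.x keyword-option style:

```clojure
(require '[clojure.java.jdbc :as jdbc])

;; db-spec and the fruit table are placeholders. :result-set-fn
;; runs eagerly while the connection is still open, so the caller
;; gets a fully-realized vector back:
(jdbc/query db-spec
            ["SELECT * FROM fruit"]
            :result-set-fn vec)
```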

Comment by Gary Fredericks [ 01/Aug/13 12:31 PM ]

I don't want a vector in particular, I want a chunked seq so that subsequent processing is faster. If we go with just the chunked-seq approach (in result-set-seq) I think there's a very small chance of any noticeable behavior change.

My hunch is that most uses of java.jdbc would see some speedup as a result of chunking.

Comment by Sean Corfield [ 01/Aug/13 12:42 PM ]

If you can provide a benchmark that proves that - and a separate patch under a separate ticket - I'd look at that more favorably.

Comment by Gary Fredericks [ 01/Aug/13 12:45 PM ]

Sounds good, thanks.





[JDBC-23] Prepared statements' parameters don't get applied Created: 21/Nov/11  Updated: 26/Jul/13  Resolved: 22/Nov/11

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Ghadi Shayban Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None
Environment:

Oracle JDBC, probably applies universally


Attachments: Text File param.patch    
Patch: Code and Test

 Description   

Basically a goof line in the code.
If you pass a PreparedStatement to with-query-results, it chops off too many items from the param list, whereas if you pass in straight SQL, it's ok.

Deleted the offending line



 Comments   
Comment by Sean Corfield [ 22/Nov/11 1:01 AM ]

Thanx Ghadi.





[JDBC-27] Use repeat instead of replicate Created: 29/Feb/12  Updated: 26/Jul/13  Resolved: 29/Feb/12

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Jonas Enlund Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None

Attachments: File repeat-patch.diff    
Patch: Code and Test

 Description   

The replicate function is deprecated and the suggested alternative is to use repeat.



 Comments   
Comment by Sean Corfield [ 29/Feb/12 5:34 PM ]

Thanx Jonas. This made me look thru all my other Clojure code but it was the only instance of replicate that I could find.





[JDBC-59] Support Postgres in the test suite Created: 25/May/13  Updated: 26/Jul/13  Resolved: 26/May/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Dan Dillinger Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: patch, test

Attachments: File postgres-test-support.diff    

 Description   

The README for c.j.j currently indicates postgres support with instructions on how to run the tests against it; however, the tests fail if you do. There are two reasons:

  • float-or-double needs a postgres case or it fails because of equality classes
  • the postgresql jdbc driver's .getGeneratedKeys() implementation returns all columns in the row, not just the auto-generated keys.

A patch is attached: postgres-test-support.diff (05/25/2013)



 Comments   
Comment by Sean Corfield [ 26/May/13 10:19 AM ]

Applied patch. Thanx!





[JDBC-62] creating indices - patch? Created: 20/Jun/13  Updated: 27/Jun/13  Resolved: 27/Jun/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Matt Oquist Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None

Attachments: File java-jdbc-ddl-indices.diff     File java-jdbc-ddl-indices-v2.diff    

 Description   

java.jdbc currently has no support for creating and deleting indices. I wrote create-index-ddl for my own project to handle this; I could work that into a patch if that would be helpful. I know you're re-working the global DB connection right now, so if you'd like a patch, but would like me to wait until the code has settled down, that's fine, too.



 Comments   
Comment by Sean Corfield [ 20/Jun/13 10:55 PM ]

Thanx Matt. Definitely interested. DDL is an area the library is weak in and I'm thinking of adding clojure.java.jdbc.ddl as a namespace dedicated to DDL-generating functions, if you'd like to kick that off?

Comment by Matt Oquist [ 23/Jun/13 9:27 PM ]

Here's a patch to add ddl/create-index and ddl/drop-index. I didn't add clojure.java.jdbc/create-index (and drop-) since clojure.java.jdbc/create-table is deprecated.

Comment by Sean Corfield [ 24/Jun/13 12:28 PM ]

Thanx. The patch is not in the correct format (it's just a diff, not a patch) - can you redo it?

Comment by Matt Oquist [ 25/Jun/13 8:27 AM ]

NP; sorry about that. Will this do?

Comment by Sean Corfield [ 27/Jun/13 12:31 PM ]

Perfect thanx!

Comment by Sean Corfield [ 27/Jun/13 12:33 PM ]

Patch merged. Documentation updated.





[JDBC-46] A mechanism for reading specialized types from result sets could be useful Created: 23/Jan/13  Updated: 26/May/13  Resolved: 19/Apr/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Gary Fredericks Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None

Attachments: Text File JDBC-46-p1.patch     Text File JDBC-46-p2.patch     Text File JDBC-46-p3.patch    

 Description   

I had this idea when trying to adapt Korma to use with postgres tables with columns of array types. I had some success using Korma's transform functions to convert the java.sql.Array objects into a seq on the way out, but recent changes to Korma seem to indicate that the transform functions are not guaranteed to run before the connection is closed. This interferes with efforts to read the arrays, which I think can be their own result sets.

Presumably korma could be changed to support reading the arrays on the way out, but maybe being able to customize the read behavior within c.j.jdbc would be simpler.

The first idea I've had toward this end is to create a protocol with a single function that objects being read out of the resultset get passed through. It would default to the identity function, but users could e.g. extend it to java.sql.Array. I don't know if there are significant performance implications for an extra function call for each value coming out of the database.

I'll have a patch in this vein attached shortly.
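As a sketch of the idea (using the three-argument signature the protocol eventually settled on in JDBC-57), extending the read hook to java.sql.Array might look like:

```clojure
(require '[clojure.java.jdbc :as jdbc])

;; Convert java.sql.Array columns to Clojure vectors as each value
;; is read, while the connection is still open:
(extend-protocol jdbc/IResultSetReadColumn
  java.sql.Array
  (result-set-read-column [val _rsmeta _idx]
    (vec (.getArray val))))
```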



 Comments   
Comment by Gary Fredericks [ 23/Jan/13 10:26 AM ]

Attached patch. Is there any straightforward way to test this? Or a place it should be documented?

Comment by Sean Corfield [ 23/Jan/13 10:48 AM ]

Docs: just needs good docstring since the clojure.github.com site is auto-generated from those (although it is still triggered manually it seems).

Test: mvn test

I have Derby, HSQLDB, MySQL, and MS SQL Server (two drivers). I don't have Postgres to test against. By default mvn test only tries Derby, HSQLDB and SQLite. TEST_DBS=... mvn test will test more.

Comment by Sean Corfield [ 23/Jan/13 10:49 AM ]

Would be nice to have a test with a custom conversion, mapping any String to some constant perhaps? Perhaps a separate test ns that is required and tested from test_jdbc.clj?

Comment by Gary Fredericks [ 23/Jan/13 2:38 PM ]

Oh, we have to be careful with testing because I believe any extensions to the protocol are going to be effective for all the remaining tests, right?

Comment by Gary Fredericks [ 23/Jan/13 3:53 PM ]

Attached new patch with docstrings and a test. I've only run the tests with MySQL. Let me know if there's anything else I can do.

Comment by Gary Fredericks [ 29/Jan/13 4:37 PM ]

This patch recently came in handy in a different situation: when JDBC was returning Integers instead of Longs, we had a really clean spot to do the conversion.

Comment by Sean Corfield [ 07/Apr/13 3:44 AM ]

There have been a lot of code changes since you submitted the patch and right now the patch doesn't apply. Can you re-submit the patch against the latest master branch so I can try again? Thanx!

Comment by Gary Fredericks [ 11/Apr/13 9:32 PM ]

I'm getting 3 test failures on master currently (with a basic mvn test), and after applying this new patch (p3) I still get 3 failures. So hopefully that means I didn't break anything.

Comment by Sean Corfield [ 19/Apr/13 9:34 PM ]

Applied. Will be in 0.3.0-alpha2

Comment by Gary Fredericks [ 15/May/13 9:52 AM ]

I'm thinking this feature is no longer necessary due to the :row-fn option – since the only motivation in the first place was to have the opportunity to transform a result before the connection closes.

There would still be no easy way to do it from Korma, but that's Korma's fault. So maybe we should remove this before 0.3.0 is released?

Comment by Sean Corfield [ 21/May/13 7:19 PM ]

This turns out to be a very useful hook for certain database types. I used it to solve an issue with Oracle, to enable users to automatically convert Oracle-specific datatypes to regular values, in JDBC-57. So, thank you... even if you think it isn't necessary now!

Comment by Gary Fredericks [ 21/May/13 7:53 PM ]

That use case could be served by :row-fn, right? You just like that it's easier? I don't mind it if others still find it valuable, I just wanted to make sure we didn't accidentally cement redundant features.

Comment by Sean Corfield [ 21/May/13 9:40 PM ]

:row-fn is intended for whole-row transformations - it is lazily mapped over the result set - so it's really meant for post-processing your data in a general sense while the connection is open. The protocol-based column reader is intended to transform specific types of data, independent of any application semantics (e.g., Oracle returns an oracle.sql.TIMESTAMP which can be automatically transformed into a java.sql.Timestamp or Date or whatever).

And :result-set-fn is intended for whole-result-set transformation, just prior to returning the result set.

Comment by Gary Fredericks [ 26/May/13 4:27 PM ]

Sounds good to me.





[JDBC-58] get-connection returns an error Created: 24/May/13  Updated: 24/May/13  Resolved: 24/May/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Salil Wadnerkar Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: None
Environment:

PostgreSQL



 Description   

My db-spec is:
(def db-spec {:classname "org.postgresql.Driver"
              :subprotocol "postgresql"
              :subname "sd_ventures_development"
              :user "sd_ventures"
              :password ""})

When I query using the old API:
(defn get-devices []
  (jdbc/with-connection db-spec
    (jdbc/with-query-results rs ["SELECT * FROM devices"]
      (doall rs))))

it works and I get the list of devices.

With the new API -
(defn get-devices []
  (let [conn (jdbc/get-connection db-spec)]
    (jdbc/query conn ["SELECT * FROM devices"])))
it returns an error:
Exception: java.lang.IllegalArgumentException: db-spec org.postgresql.jdbc4.Jdbc4Connection@4c54daa9 is missing a required parameter
jdbc.clj:221 clojure.java.jdbc/get-connection
jdbc.clj:628 clojure.java.jdbc/db-with-query-results*

I am completely bowled over by this error. I checked the source code. It should never reach that statement in get-connection because I am supplying both subname and subprotocol.



 Comments   
Comment by Sean Corfield [ 24/May/13 11:10 AM ]

Not a bug. You are using the API incorrectly. query takes a db-spec NOT a connection:

(jdbc/query db-spec ["SELECT * FROM devices"])





[JDBC-55] More reflection warnings to investigate Created: 14/May/13  Updated: 21/May/13  Resolved: 21/May/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Task Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None

Attachments: Text File jdbc-55-fix-reflection-warnings-v1.txt    

 Description   

Reflection warning, clojure/java/jdbc.clj:618:50 - reference to field executeQuery can't be resolved.
Reflection warning, clojure/java/jdbc.clj:618:33 - reference to field close can't be resolved.



 Comments   
Comment by Andy Fingerhut [ 15/May/13 10:34 AM ]

Patch jdbc-55-fix-reflection-warnings-v1.txt dated May 15 2013 adds one more type hint that eliminates the two mentioned instances of reflection.

Comment by Sean Corfield [ 21/May/13 10:08 AM ]

Patch applied. Thank you!





[JDBC-57] Better support for Oracle Created: 18/May/13  Updated: 20/May/13  Resolved: 20/May/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Manish Handa Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None
Environment:

Oracle 10g and above


Attachments: Text File datatype.patch     Text File oracle-fix.patch    

 Description   

This is just a suggestion, and I am not sure if it is the right way to do this.

When fetching data from Oracle the oracle jdbc driver returns all the numeric type columns as BigDecimal and date/datetime columns as oracle.sql.TIMESTAMP. This causes a lot of headache as we have to remember to convert the data to the correct type.

Although this can be handled at the application level by using a :row-fn, it would be really nice if this library handled the conversion itself.



 Comments   
Comment by Manish Handa [ 18/May/13 2:10 PM ]

Patch for handling results returned by oracle jdbc thin driver

Comment by Sean Corfield [ 19/May/13 1:44 PM ]

I think the right solution here will be to modify the result-set-read-column protocol function to be passed rsmeta and i, and then Oracle users can extend the protocol to those types and perform the conversions they want. I don't believe automatically converting BigDecimal is going to be the right choice for all users, although doing the conversion for oracle.sql.TIMESTAMP is a more arguable case.

Comment by Manish Handa [ 20/May/13 9:57 AM ]

Agreed. I thought of this, but it requires changing the protocol to take rsmeta along with the column index. I have uploaded another patch for this change.

Also, it'll be better to leave the handling of oracle.sql.TIMESTAMP to the application, as it needs a dependency on the Oracle thin driver (which we should certainly not add to java.jdbc).

So, after this updated protocol, the oracle users can extend it with something like this (much better now):

;;; Oracle users can define this in their applications
;;; to extend it to BigDecimal and other Oracle types
(extend-protocol IResultSetReadColumn
  oracle.sql.TIMESTAMP
  (result-set-read-column [x _ _] (.timestampValue x))

  java.math.BigDecimal
  (result-set-read-column [val rsmeta idx]
    (if-not (= (.getScale rsmeta idx) 0)
      (.doubleValue val)                 ;;; reals
      (if (= (.getPrecision rsmeta idx) 1)
        (= 1 (.longValue val))           ;;; booleans
        (.longValue val)))))             ;;; integrals

Comment by Sean Corfield [ 20/May/13 10:45 AM ]

Thanx. Yes, that was where I was going with this, and while we're in alpha for 0.3.0 I'm not averse to changing the signatures of new stuff introduced in the API rewrite. I'll review the patch later and probably roll it in this week (I just got back from a web tech conference in MN and I'm busy catching up on work!).

Comment by Sean Corfield [ 20/May/13 10:52 PM ]

Patch applied. Tests updated to match. Will be in 0.3.0-alpha5.





[JDBC-13] Add unit tests for generated keys Created: 25/Jul/11  Updated: 07/May/13  Resolved: 07/May/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Task Priority: Minor
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

Need to test that generated keys come back as expected for various databases - and that DBs which can't generate keys let you use transactions and select identity kind of SQL stuff.



 Comments   
Comment by Sean Corfield [ 07/May/13 9:28 PM ]

This has sort of been handled along the way with functions used in the test suite to reflect how various databases handle key generation / return. Surprisingly few seem to reliably return keys on an insert.





[JDBC-54] db-transaction does not close connection? Created: 01/May/13  Updated: 07/May/13  Resolved: 07/May/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Critical
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

See https://gist.github.com/aphyr/5493964



 Comments   
Comment by Sean Corfield [ 03/May/13 12:39 PM ]

Having reviewed the code, the result of get-connection is never closed, only the result of calling .getConnection on a PreparedStatement and that matches how java.jdbc has always worked, so I don't believe this is a problem. Aphyr mentioned on Twitter that he believes the problem was just due to his test code and not inherent in java.jdbc but we are still waiting for confirmation. Lowering priority for now.

Comment by Sean Corfield [ 07/May/13 8:30 PM ]

I believe I have verified this locally. Under the old API, a connection was only added explicitly at the "top" of a call chain. Under the new API, a connection can be added on-demand in many places which makes the logic much more complex and there are a couple of places where normal usage could indeed cause a connection leak.

Comment by Sean Corfield [ 07/May/13 9:27 PM ]

I believe this is resolved in the latest snapshot (which will become Alpha 4 at some point), as I've gone thru and changed how get-connection is called and ensured that with-open is used whenever a new connection is actually created.

The code needs a fair bit of clean up now tho'. I'm not happy with the result of adding that logic so I need to spend some time refactoring...
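The shape of the fix can be sketched like this (the function name and the :connection check are simplifications of the library's internals, shown only to illustrate the pattern):

```clojure
;; Hypothetical sketch: only close a connection that this code
;; itself opened; a caller-supplied connection is left alone.
(defn with-db-connection* [db func]
  (if (:connection db)
    ;; caller already supplied a connection - don't close it here
    (func db)
    ;; we opened it, so with-open guarantees it is closed on exit
    (with-open [^java.sql.Connection con (get-connection db)]
      (func (assoc db :connection con)))))
```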





[JDBC-51] Reflection warning removal Created: 07/Apr/13  Updated: 19/Apr/13  Resolved: 19/Apr/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Task Priority: Critical
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

Before releasing 0.3.0, make a pass over the code and remove reflection where reasonable.



 Comments   
Comment by Sean Corfield [ 19/Apr/13 9:52 PM ]

Declared get-connection to return Connection in 0.3.0-alpha2.





[JDBC-52] Allow execute! to do batch updates via :multi? optional argument Created: 19/Apr/13  Updated: 19/Apr/13  Resolved: 19/Apr/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

See https://groups.google.com/d/msg/clojure-java-jdbc/H3spAaWgkZI/9WJPXJsHiVsJ for background and justification.

execute! would take an optional :multi? argument, default false, and execute-helper would either call db-do-prepared directly (as it does now) or apply it, assuming (rest sql-params) is a sequence of vectors, which would become multiple params-groups.

I think this would also allow insert! to be simplified and implemented in terms of execute! which would be a nice symmetry.
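With the API as eventually implemented, a batch update via :multi? would look something like this (table and column names are illustrative):

```clojure
(require '[clojure.java.jdbc :as jdbc])

;; Each vector after the SQL string becomes one parameter group,
;; executed as a single batched PreparedStatement:
(jdbc/execute! db-spec
               ["UPDATE fruit SET cost = ? WHERE name = ?"
                [12 "Apple"]
                [14 "Pear"]]
               :multi? true)
```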



 Comments   
Comment by Sean Corfield [ 19/Apr/13 9:18 PM ]

Implemented. Will be in alpha2.





[JDBC-43] Rewrite old API (with *db*) in terms of new API Created: 26/Dec/12  Updated: 07/Apr/13  Resolved: 07/Apr/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Task Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

Now that the new API exists, the old API should be rewritten in terms of the new API, passing db and binding it appropriately. Where no new equivalent exists, extend the new API (in particular, the form of insert that just provides rows without column names is not supported in the new API).



 Comments   
Comment by Sean Corfield [ 30/Dec/12 7:51 PM ]

Most of this has been done now. It needs extensive testing!

Comment by Gerrard McNulty [ 18/Feb/13 4:05 PM ]

When are you planning on making a stable release? Are the new APIs going to change at all?

Comment by Sean Corfield [ 19/Feb/13 6:41 PM ]

There are still a number of open tickets that need completing before I can call this a stable release. Folks are welcome to test the SNAPSHOT release on Sonatype and provide feedback.

Comment by Sean Corfield [ 07/Apr/13 2:23 PM ]

0.3.0-alpha1 has been released so this is "done". What's left is polishing and bug fixing.





[JDBC-41] How to preserve order of rows? Created: 24/Dec/12  Updated: 07/Apr/13  Resolved: 07/Apr/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Konstantin Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

I'd like to get the ordered record set within with-query-results
(or the way to restore the original order).
Is this possible with clojure/java.jdbc?



 Comments   
Comment by Sean Corfield [ 24/Dec/12 12:49 PM ]

I assume you mean the order of columns within the rows, not the actual order of rows within the result set?

I'm looking at providing an alternative result set format using arrays instead of maps which will preserve column orders.

Since maps are unordered by default in Clojure, I'd be suspicious of code that relies on ordering of keys (so I'm reducing this to 'minor').

Comment by Konstantin [ 24/Dec/12 10:54 PM ]

You are right, I mean the order of columns, not the rows.

I understand that maps are unordered. There are a few cases where the order matters, though. One of them is my use case - a simple SQL database client.

Probably it would be better to make this behaviour optional, since it is not needed very often?

Comment by Sean Corfield [ 25/Dec/12 8:26 PM ]

An optional keyword argument sounds like a reasonable compromise. I'm already looking at keyword arguments on resultset-seq for returning arrays instead of maps and returning an order-preserving map should be as simple as replacing (zipmap keys (row-values)) with (apply array-map (interleave keys (row-values)))
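The difference is easy to see at the REPL:

```clojure
;; array-map preserves insertion order when seq'd:
(keys (apply array-map (interleave [:b :a :c] [1 2 3])))
;; => (:b :a :c)

;; zipmap makes no ordering promise: beyond a handful of entries it
;; returns a hash map, whose key order is an implementation detail:
(class (zipmap (range 20) (range 20)))
;; => clojure.lang.PersistentHashMap
```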

Comment by Sean Corfield [ 07/Apr/13 3:38 AM ]

Fixed in 0.3.0-SNAPSHOT by allowing :as-arrays argument to query and result-set-seq functions, which returns result as vector of column names, followed by vectors for each row.





[JDBC-42] insert! always uses a transaction Created: 26/Dec/12  Updated: 06/Apr/13  Resolved: 06/Apr/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

execute!, delete!, and update! all accept an optional :transaction? argument that allows you to control whether the operation occurs in a transaction or not; insert! does not, and always runs in one.



 Comments   
Comment by Sean Corfield [ 06/Apr/13 6:53 PM ]

Fixed in 0.3.0-SNAPSHOT - :transaction? argument is now supported on insert!





[JDBC-45] insert! (and insert) need to support insert of full rows with no column names Created: 27/Dec/12  Updated: 06/Apr/13  Resolved: 06/Apr/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

In order to fully deprecate insert-values (and to deprecate insert-rows at all), the new insert logic needs to support lack of column names.



 Comments   
Comment by Sean Corfield [ 06/Apr/13 4:38 PM ]

Fixed in 0.3.0-SNAPSHOT and insert-values / insert-rows are fully deprecated.





[JDBC-49] jdbc insert-records can not handle spaces in column names Created: 21/Mar/13  Updated: 06/Apr/13  Resolved: 06/Apr/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Cees van Kemenade Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: bug
Environment:

Postgres on Debian


Attachments: File jdbcIssue2.clj    

 Description   

If a database contains column-names that include spaces an sql/insert-record will fail. However, inserting via sql/do-commands or sql/insert-rows works as expected.

Attached you find a file showing the error.



 Comments   
Comment by Cees van Kemenade [ 22/Mar/13 1:21 AM ]

Further analysis based on a suggestion from Sean leads to this: there seems to be an issue in the handling of the naming-strategy - either in the way I define it or in the way it is implemented:

(sql/with-connection db
  (sql/with-naming-strategy {:keyword identity :entity (fn [x] (str \" x \"))}
    ...
    ))

When I remove the naming-strategy and replace the offending insert by a manual quoting strategy via
(sql/insert-records "test_jdbc_naming3" {((fn [x] (str \" x \")) "b- b") 7})
which corresponds to
(sql/insert-records "test_jdbc_naming3" {"\"b- b\"" 7})
then the code runs fine.

So the issue is located in the handling of the naming-strategy, or in the way I defined the naming-strategy (for postgres).

Comment by Sean Corfield [ 06/Apr/13 4:14 PM ]

Fixed in 0.3.0-SNAPSHOT by wiring as-str variable into compatibility layer as :entities argument.
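In the 0.3.0 API the same quoting can be expressed with the :entities option; a sketch (against a hypothetical db-spec) using the quoted helper from clojure.java.jdbc.sql:

```clojure
(require '[clojure.java.jdbc :as jdbc]
         '[clojure.java.jdbc.sql :as sql])

;; (sql/quoted \") wraps each entity name in double quotes, so a
;; Postgres column name containing a space survives intact:
(jdbc/insert! db-spec :test_jdbc_naming3
              {(keyword "b- b") 7}
              :entities (sql/quoted \"))
```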





[JDBC-50] insert-rows (incorrectly) patches incomplete records Created: 21/Mar/13  Updated: 06/Apr/13  Resolved: 06/Apr/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Cees van Kemenade Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: bug, docs
Environment:

Postgres database on Debian machine


Attachments: File jdbcIssue.clj    

 Description   

When using sql/insert-rows with a dataset that contains incomplete rows the rows will be patched with values from the preceding rows. The behaviour to be expected is not documented.

I would expect insert-rows either to accept only complete rows or, when accepting incomplete rows, to pad them with nil values instead of values from other rows.

Attached you find an example and the output when running in comments.



 Comments   
Comment by Sean Corfield [ 06/Apr/13 3:10 PM ]

This looks like an old / long-standing bug that no one has run into before.

In the new (soon-to-be 0.3.0) release, (clojure.java.jdbc.sql/insert :table [1 1] [2] []) would throw an exception:

IllegalArgumentException insert called with inconsistent number of columns / values clojure.java.jdbc.sql/insert-multi-row (sql.clj:116)

I will update clojure.java.jdbc/insert-rows to throw a similar exception.

Comment by Sean Corfield [ 06/Apr/13 3:38 PM ]

Fixed in 0.3.0-SNAPSHOT by throwing an exception if the value-groups are not all the same length.

Currently relies on the DB to trap column names not matching value groups (which is known not to work on SQLite but does work on other DBs as far as I know).

insert-rows is deprecated in 0.3.0 - use insert! instead going forward (which does more validation).
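The validation amounts to a simple consistency check before any SQL is generated; a sketch (helper name is illustrative, not the library's):

```clojure
;; Reject value groups of differing lengths up front, rather than
;; silently padding rows or deferring the mismatch to the database:
(defn check-value-groups [value-groups]
  (when-not (apply = (map count value-groups))
    (throw (IllegalArgumentException.
            "insert called with inconsistent number of columns / values"))))
```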





[JDBC-47] Transactions do not rollback on non-Exception Throwables Created: 26/Feb/13  Updated: 26/Feb/13  Resolved: 26/Feb/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Gary Fredericks Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None
Environment:

0.2.3


Attachments: Text File JDBC-47-p1.patch    

 Description   

transaction* currently catches Exceptions for its rollback logic. This means Throwables subclassing java.lang.Error are not caught, and indeed the transaction seems to be immediately committed in this case (at least if the connection has autocommit set to true).

This happened to me in the case of an OutOfMemoryError. Another example is that the following code commits:

(transaction
  (insert-records "table" {:foo 12})
  (assert false))

Presumably this can be fixed by catching Throwable instead of Exception. I'll prepare a patch to that effect.
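The core of the fix is the catch clause; the transaction wrapper's shape, sketched (names are illustrative, not the library's internals):

```clojure
;; Catch Throwable so Errors (e.g. OutOfMemoryError, AssertionError)
;; also trigger a rollback before propagating to the caller:
(defn run-in-transaction [^java.sql.Connection con f]
  (.setAutoCommit con false)
  (try
    (let [result (f)]
      (.commit con)
      result)
    (catch Throwable t
      (.rollback con)
      (throw t))))
```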



 Comments   
Comment by Sean Corfield [ 26/Feb/13 4:48 PM ]

Thanx Gary. In the latest source there may be two pieces of transaction logic so please check for anywhere else that Exception is mentioned!

Comment by Gary Fredericks [ 26/Feb/13 5:17 PM ]

This patch changes Exception to Throwable in two places – transaction* and db-transaction*. The tests pass.

Comment by Sean Corfield [ 26/Feb/13 8:08 PM ]

Patch applied.





[JDBC-38] There is no way to execute a raw statement outside a transaction Created: 12/Aug/12  Updated: 27/Dec/12  Resolved: 26/Dec/12

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Minor
Reporter: Rasmus Svensson Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None
Environment:

Dependencies used:

[org.clojure/clojure "1.4.0"]
[org.clojure/java.jdbc "0.2.3"]
[org.xerial/sqlite-jdbc "3.7.2"]



 Description   

To be able to enforce foreign key constraints in SQLite you need to execute the following statement:

PRAGMA foreign_keys = ON;

However, this statement can't be used from within a transaction. From the SQLite docs:

It is not possible to enable or disable foreign key constraints in the middle of a multi-statement transaction (when SQLite is not in autocommit mode). Attempting to do so does not return an error; it simply has no effect.

In clojure.java.jdbc both do-commands and do-prepared unconditionally run the supplied statements in a transaction. To me it seems like these two functions are the only ones that provide a way to execute an arbitrary SQL statement.

Example using [org.clojure/java.jdbc "0.2.3"] and [org.xerial/sqlite-jdbc "3.7.2"]:

(sql/with-connection {:classname   "org.sqlite.JDBC"
                      :subprotocol "sqlite"
                      :subname     "foo.db"}
  (sql/do-commands "PRAGMA foreign_keys = ON;")
  (sql/with-query-results res ["PRAGMA foreign_keys;"]
    (doall res)))
; => ({:foreign_keys 0})

When using this function instead of do-commands foreign keys work as expected:

(defn do-raw [& commands]
  (with-open [^java.sql.Statement stmt
              (.createStatement ^java.sql.Connection (sql/connection))]
    (doseq [^String cmd commands]
      (.addBatch stmt cmd))
    (let [result (.executeBatch stmt)]
      (if (and (= 1 (count result)) (= -2 (first result)))
        (list (.getUpdateCount stmt))
        (seq result)))))

This function is simply do-commands with execute-batch inlined and the wrapping transaction form removed.



 Comments   
Comment by Sean Corfield [ 12/Aug/12 9:33 PM ]

It seems like the more flexible thing to do would be to allow users to specify that a transaction not be used on the existing operations. I'm reworking the API substantially soon so I'll bear that in mind. I might not make it possible with the existing API but I'll make sure that's an option in the new API that I'm adding.

Comment by Sean Corfield [ 26/Dec/12 4:11 PM ]

0.2.4-SNAPSHOT contains db-do-commands which allows bare execution of commands against a specific database connection with or without a transaction wrapping the commands.
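Assuming the db-spec from the description above, usage might look like the following sketch (the boolean argument controlling the transaction wrapper reflects my reading of the 0.2.4-SNAPSHOT API and should be checked against the release):

```clojure
;; Hedged sketch: run the SQLite PRAGMA outside a transaction via
;; db-do-commands; `false` disables the transaction wrapper.
(require '[clojure.java.jdbc :as sql])

(def sqlite-db {:classname   "org.sqlite.JDBC"
                :subprotocol "sqlite"
                :subname     "foo.db"})

(sql/db-do-commands sqlite-db false "PRAGMA foreign_keys = ON;")
```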

Comment by Rasmus Svensson [ 27/Dec/12 7:03 AM ]

Neat!





[JDBC-44] Add db-do-commands to match new API Created: 26/Dec/12  Updated: 26/Dec/12  Resolved: 26/Dec/12

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

New version of do-commands but with db explicitly passed in and transaction? argument - this will satisfy JDBC-38.



 Comments   
Comment by Sean Corfield [ 26/Dec/12 4:10 PM ]

db-do-commands added





[JDBC-20] Create new clean API that accepts connection etc (post 0.1.0) Created: 12/Oct/11  Updated: 26/Dec/12  Resolved: 26/Dec/12

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

Create streamlined API that relies on no bindings but instead accepts everything as an argument. Refactor existing API methods to operate in terms of this new API as appropriate.



 Comments   
Comment by Sean Corfield [ 12/Oct/11 4:28 PM ]

See JDBC-3 for some more background.

Comment by Sean Corfield [ 26/Dec/12 3:41 PM ]

The new API has been added. There is also an optional DSL to make it slightly easier to generate SQL that can be used with both the original and the new API.

I will open tickets for follow-up issues.





[JDBC-35] Wrong timezone for java.sql.Date, java.sql.Time and java.sql.Timestamp objects returned by with-query-results Created: 20/Jul/12  Updated: 12/Sep/12  Resolved: 12/Sep/12

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Jestine Paul Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: bug

Attachments: File resultset-timezone.diff    

 Description   

Our database stores date and time related information in UTC. But, the results from with-query-results creates objects in the local timezone. This is causing a lot of unexpected behavior when constructing Joda DateTime objects in clj-time. Currently, the with-query-results function is using the single argument getter functions in the ResultSet

getDate(int columnIndex)
getTime(int columnIndex)
getTimestamp(int columnIndex)

We can solve this problem if we can optionally pass the timezone information and call the two argument getter functions.

getDate(int columnIndex, Calendar cal)
getTime(int columnIndex, Calendar cal)
getTimestamp(int columnIndex, Calendar cal)
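The request could be sketched as a small helper that reads a column relative to a UTC Calendar rather than the JVM default; the function name is an assumption, not part of java.jdbc:

```clojure
;; Hedged sketch of the two-argument getter the reporter asks for.
(import '(java.util Calendar TimeZone))

(defn get-utc-timestamp [^java.sql.ResultSet rs ^long i]
  (.getTimestamp rs i (Calendar/getInstance (TimeZone/getTimeZone "UTC"))))
```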



 Comments   
Comment by Sean Corfield [ 21/Jul/12 1:30 AM ]

Can you provide a self-contained test case? Lots of people are using java.jdbc in production without running into this problem, and at World Singles we've had this in production for a long time in a high traffic environment without seeing any problems with timezones. Dates go in and out of the database unchanged, which is exactly as expected - and we have databases running in three different timezones.

Comment by Jestine Paul [ 21/Jul/12 5:29 AM ]

I have added a new test and it is failing with my Postgresql.

https://github.com/jestinepaul/java.jdbc/commit/195397b1b2a0245d2439ab9963fe2138450a27f3

Comment by Sean Corfield [ 21/Jul/12 1:03 PM ]

Looking at recommended practices out there, such as listed here: http://stackoverflow.com/questions/2532729/daylight-saving-time-and-timezone-best-practices one important thing is: "On Servers, set hardware clocks and OS clocks to UTC. Use NTP services on all servers." The same advice is repeated here: http://www.dbspecialists.com/blog/database-theory/intelligent-date-handling/

Looking further into PostgreSQL (a database I generally don't use), I see this: http://stackoverflow.com/questions/6151084/which-timestamp-type-to-choose-in-a-postgresql-database which seems to contain a lot of PostgreSQL-specific stuff. However, that article makes it clear that you need to set the database timezone to UTC in order to SELECT timestamp columns correctly.

In my opinion, the behavior you're seeing is not a bug (in java.jdbc) but an artifact of your environment being set up incorrectly. I'll leave this ticket open for a little while for more discussion but without a concrete patch that is shown to not affect other use cases, I will close this ticket by the end of August.

Comment by Jestine Paul [ 23/Jul/12 1:26 PM ]

As mentioned earlier, the database server's timezone is UTC and the JDBC library runs on a client machine set to local time. If the database has a date value of 2012-7-23, it changes to 2012-7-22 on the client side (if the client is running in a timezone ahead of UTC) when coerced using to-date-time in clj-time. This is extremely dangerous and is not specific to any database. I noticed it first on Sybase Enterprise Server and I have now also replicated it in the test case with Postgresql.

I have attached a patch which fixes this problem by passing in an optional parameter. The test case is also modified to use clj-time, as it expresses the problem more clearly. Please let me know if you need any more clarification.

p.s. I have already mailed the CA and should reach Durham in a few days.

Comment by Sean Corfield [ 12/Sep/12 8:01 PM ]

Since the proposed patch requires the user to pass in a list of columns to treat specially - and reorders columns in results and adds some performance overhead for all users, not just those wanting to adjust column values - I believe users who choose to run JDBC clients in timezones different to the database, against the widely-listed best practice recommendations, should bear the burden of adjusting the columns themselves in their own client code. Since they already know which columns to adjust and they know the difference between UTC and local time, they should be able to make the adjustments easily enough as part of post-processing the resultset-seq.
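The client-side post-processing suggested here could be sketched as below; the column names, helper names, and the adjustment direction are illustrative assumptions to be verified against your own data:

```clojure
;; Hedged sketch: reinterpret chosen columns, read in the JVM's default
;; zone, as UTC instants while walking the resultset-seq.
(import '(java.util TimeZone))

(defn local->utc
  "Shift a java.sql.Timestamp parsed in the default zone to the UTC instant."
  [^java.sql.Timestamp ts]
  (let [offset (.getOffset (TimeZone/getDefault) (.getTime ts))]
    (java.sql.Timestamp. (+ (.getTime ts) offset))))

(defn adjust-columns [rows cols]
  (map (fn [row]
         (reduce (fn [r c]
                   (if (get r c) (update-in r [c] local->utc) r))
                 row cols))
       rows))

;; usage: (adjust-columns (resultset-seq rs) [:created_at :updated_at])
```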





[JDBC-18] Consider ["name = ? and age = ?" nil 42] Created: 11/Oct/11  Updated: 12/Aug/12  Resolved: 12/Aug/12

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Duplicate Votes: 0
Labels: None


 Description   

The query is "name = nil and age = 42" where the intent is probably "name is null and age = 42".

It's not clear whether this is really the library's responsibility (to rewrite the SQL) or whether this should be handled by another layer above so this ticket is a placeholder for discussion.



 Comments   
Comment by Julian Birch [ 13/Oct/11 1:43 PM ]

I've been thinking about this a bit more, and I'm thinking that a function "match-clause" might deal with this in a fairly composable way. So

(match-clause { :name nil :age 42 })

would return

["Name is null and Age = ?" 42]

It's not exactly ClojureQL, but it's non-invasive and makes query-by-example easy to achieve.
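A minimal sketch of such a match-clause, assuming nil values should become "is null" and everything else a positional parameter:

```clojure
;; Hedged sketch of the match-clause idea; not java.jdbc's implementation.
(require '[clojure.string :as str])

(defn match-clause [m]
  (let [clauses (map (fn [[k v]]
                       (if (nil? v)
                         (str (name k) " is null")
                         (str (name k) " = ?")))
                     m)]
    (into [(str/join " and " clauses)]
          (remove nil? (vals m)))))

;; (match-clause {:name nil :age 42}) ;=> ["name is null and age = ?" 42]
```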

Comment by Sean Corfield [ 13/Oct/11 2:28 PM ]

That's nice idea. I actually do something similar in the CRUD wrapper I built around java.jdbc at World Singles so that might be a nice addition to the library.

Comment by Sean Corfield [ 12/Aug/12 9:39 PM ]

This will be incorporated into the new API I'm adding in JDBC-20 so you can say (where {:name nil :age 42}) and it will return the correct SQL where clause. Accordingly, I'm closing this as a duplicate.

Comment by Sean Corfield [ 12/Aug/12 9:39 PM ]

A subset of what will be implemented as part of JDBC-20





[JDBC-36] using exisiting java based connection pool in clojure Created: 24/Jul/12  Updated: 12/Aug/12  Resolved: 12/Aug/12

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Amir Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: None


 Description   

We have a connection pool in Java and would like to use the same for our complete project where code is mixture of Clojure/Java.

(def db (ConnectionManager/getConnection)) ; #<ConnectionHandle com.jolbox.bonecp.ConnectionHandle@53371566> is the connection type we get after executing this line
(sql/with-connection db
  (sql/with-query-results res ["select * from DummyTable where rownum <= 100"]
    (vec res)
    (map #(println (class (val %)) " " (key %)) (first res))))



 Comments   
Comment by Amir [ 24/Jul/12 3:00 AM ]

If we execute the above (with connection - with-query results) i get the following exception

IllegalArgumentException db-spec com.jolbox.bonecp.ConnectionHandle@53371566 is missing a required parameter clojure.java.jdbc/get-connection (jdbc.clj:192)

Comment by Sean Corfield [ 24/Jul/12 12:34 PM ]

Use the :datasource option to define your db-spec:

(def db {:datasource bonecpDataSourceObject})

I looked at the BoneCP docs and I don't see a ConnectionManager class but if you define the db-spec based on the BoneCPDataSource object, java.jdbc will call getConnection on that object to get a working connection for operations.

Comment by Sean Corfield [ 31/Jul/12 7:49 PM ]

Amir, have you had a chance to try my suggestion?

Comment by Amir [ 01/Aug/12 2:20 AM ]

Hi Sean,

No, we moved to Clojure-based pooling a while ago. We thought it better to have everything in Clojure rather than a Java+Clojure mix, but I can give it a try today.

Thanks for your suggestion. Will keep you posted.

Comment by Sean Corfield [ 12/Aug/12 9:35 PM ]

I'll close this out then Amir, thanx.

Comment by Sean Corfield [ 12/Aug/12 9:36 PM ]

I believe there is a way to access the BoneCP connection pool directly and use it with java.jdbc as-is; however, it is no longer an issue for the creator of this ticket.





[JDBC-34] get-connection's URI conventions incompatible with PostgreSQL Created: 12/Jun/12  Updated: 31/Jul/12  Resolved: 18/Jun/12

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Chas Emerick Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

get-connection requires that provided URIs be structured like so:

dbtype://user:password@host:port/database

This is often sufficient, but many PostgreSQL URIs require the use of URI parameters to further configure connections. For example, postgresql.heroku.com provides JDBC URIs like this:

jdbc:postgresql://ec2-22-11-231-117.compute-1.amazonaws.com:5432/d1kuttup5cbafl6?user=pcgoxvmssqabye&password=PFZXtxaLFhIX-nCA0Vi4UbJ6lH&ssl=true

...which, when used outside of Heroku's network, require a further sslfactory=org.postgresql.ssl.NonValidatingFactory parameter.

The PostgreSQL JDBC driver supports a number of different URI parameters, and recommends putting credentials into parameters rather than using the user:password@ convention. Peeking over at the Oracle thin JDBC driver's docs, it appears that it expects credentials using its own idiosyncratic convention, user/password@.

<opinion>
This all leads me to think that get-connection should pass URIs along to DriverManager without modification, and leave URI format conventions up to the drivers involved. For now, my workaround is to do essentially that, using a map like this as input to with-connection et al.:

{:factory #(DriverManager/getConnection (:url %))
 :url "jdbc:postgresql://ec2-22-11-231-117.compute-1.amazonaws.com:5432/d1kuttup5cbafl6?user=pcgoxvmssqabye&password=PFZXtxaLFhIX-nCA0Vi4UbJ6lH&ssl=true"}

That certainly works, but I presume that such a workaround won't occur to many users, despite the docs/source.
</opinion>

I don't think I've used java.jdbc enough (or RDBMSs enough of late) to comfortably provide a patch (or feel particularly confident in the suggestion above). Hopefully the report is helpful in any case.



 Comments   
Comment by Sean Corfield [ 14/Jun/12 1:36 AM ]

How about an option that takes a map like:

{:connection-uri "jdbc:postgresql://ec2-22-11-231-117.compute-1.amazonaws.com:5432/d1kuttup5cbafl6?user=pcgoxvmssqabye&password=PFZXtxaLFhIX-nCA0Vi4UbJ6lH&ssl=true"}

Essentially as a shorthand for the workaround you've come up with?

Comment by Sean Corfield [ 15/Jun/12 10:20 PM ]

Try 0.2.3-SNAPSHOT which has support for :connection-uri and let me know if that is a reasonable solution for you?

Comment by Chas Emerick [ 18/Jun/12 3:35 PM ]

Yup, 0.2.3-SNAPSHOT's :connection-uri works fine. I've since moved on to using a pooled datasource, but this will hopefully be a more obvious path to newcomers than having to learn about :factory and DriverManager.

Comment by Sean Corfield [ 18/Jun/12 3:52 PM ]

Resolved by adding :connection-uri option.

Comment by Carlos Cunha [ 28/Jul/12 8:09 PM ]

When accessing a Heroku database from outside Heroku, "sslfactory=org.postgresql.ssl.NonValidatingFactory" doesn't work. I get "ERROR: syntax error at or near "user" Position: 13" (class org.postgresql.util.PSQLException). This happens whether I add it to the :subname or :connection-uri strings.

Another minor issue: why does the documentation of "with-connection" (0.2.3) give the following format for the connection string URI:
"subprotocol://user:password@host:post/subname
An optional prefix of jdbc: is allowed."
when the URI that can actually be parsed successfully is like the one above: jdbc:postgresql://ec2-22-11-231-117.compute-1.amazonaws.com:5432/d1kuttup5cbafl6?user=pcgoxvmssqabye&password=PFZXtxaLFhIX-nCA0Vi4UbJ6lH&ssl=true
"subprotocol://user:password@host:post/subname" (the format of the DATABASE environment variables on Heroku) will not be parsed correctly. Why is the URI format that Heroku uses not supported by the parser?

maybe i'm doing something wrong here

thanks in advance

Comment by Sean Corfield [ 29/Jul/12 4:57 PM ]

Carlos, the :connection-uri passes the string directly to the driver with no parsing. The exception you're seeing is coming from inside the PostgreSQL driver so you'll have to consult the documentation for the driver.

The three "URI" styles accepted by java.jdbc are:

  • :connection-uri - passed directly to the driver with no parsing or other logic in java.jdbc,
  • :uri - a pre-parsed Java URI object,
  • a string literal - any optional "jdbc:" prefix is ignored, then the string is parsed by logic in java.jdbc, based on the pattern shown (subprotocol://user:password@host:port/subname).

If you're using :connection-uri (which is used on its own), you're dealing with the JDBC driver directly.

If you're using :uri or a bare string literal, you're dealing with java.jdbc's parsing (implemented by Phil Hagelberg - of Heroku).

Hope that clarifies?

Comment by Carlos Cunha [ 29/Jul/12 8:36 PM ]

Sean, thank you for such a comprehensive explanation.

Still, it didn't work with any of the options. I had previously used a Postgres JDBC driver to export to the same database (in an SQL modeller, SQLEditor for the Mac) and it worked (though it would connect sometimes, but not others). The connection string used was like "jdbc:postgresql://host:port/database?user=xxx&password=yyy&ssl=true&sslfactory=org.postgresql.ssl.NonValidatingFactory". The driver name was "org.postgresql.Driver" (JDBC4). Anyway, time to give up; I will just use a local database.

Thank you!

Comment by Carlos Cunha [ 31/Jul/12 7:20 PM ]

Sean, the JDBC combinations were working after all. I was neglecting an insert operation on a table with the reserved SQL keyword "user", so I was getting "ERROR: syntax error at or near "user" Position: 13"; the connection was therefore already established at that point.

I'm sorry for all the trouble answering the question.

thank you

Comment by Sean Corfield [ 31/Jul/12 7:46 PM ]

Glad you got to the bottom of it and confirmed that it wasn't a problem in java.jdbc!





[JDBC-33] update-or-insert does not work with oracle Created: 21/May/12  Updated: 10/Jun/12  Resolved: 10/Jun/12

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Minor
Reporter: Brian Tatnall Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: bug
Environment:

clojure 1.3.0
clojure.java.jdbc 0.2.1
oracle 10.2.0.4.0



 Description   

According to the oracle documentation

For a prepared statement batch, it is not possible to know the number of rows affected in the database by each individual statement in the batch. Therefore, all array elements have a value of -2. According to the JDBC 2.0 specification, a value of -2 indicates that the operation was successful but the number of rows affected is unknown.

As documented, update-values returns (-2), which means the (if (zero? ...)) check in update-or-insert-values will never return true.

Oracle does provide the number of rows updated via the getUpdateCount method on instances of Statement.

The following change fixes this problem for Oracle users, but I am not sure how it will affect other drivers.

(defn oracle-do-prepared
  "Executes an (optionally parameterized) SQL prepared statement on the
  open database connection. Each param-group is a seq of values for all of
  the parameters.
  Return a seq of update counts (one count for each param-group)."
  [sql & param-groups]
  (with-open [^PreparedStatement stmt (prepare-statement (connection) sql)]
    (if (empty? param-groups)
      (transaction* (fn [] (vector (.executeUpdate stmt))))
      (do
        (doseq [param-group param-groups]
          (set-parameters stmt param-group)
          (.addBatch stmt))
        (transaction* (fn [] (let [result (.executeBatch stmt)
                                   count  (.getUpdateCount stmt)]
                               (vector count))))))))


 Comments   
Comment by Sean Corfield [ 22/May/12 12:17 AM ]

Interesting. (.getUpdateCount stmt) is a standard part of the JDBC API, so this looks reasonable: if the result of (.executeBatch stmt) is a single sequence containing -2, the code could call (.getUpdateCount stmt) and return that (single) value as a sequence. That should be completely portable, but it will be hard to test since none of the other DBs return -2 as far as I know, and I don't have an Oracle install to test against.
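The portable check described here can be sketched as follows, mirroring the do-raw workaround from JDBC-38; the function name is an assumption:

```clojure
;; Hedged sketch of the portable fallback: if executeBatch reports a single
;; -2 (SUCCESS_NO_INFO), ask the statement for its update count instead.
(defn batch-update-counts [^java.sql.Statement stmt]
  (let [result (.executeBatch stmt)]
    (if (and (= 1 (count result)) (= -2 (first result)))
      [(.getUpdateCount stmt)]
      (seq result))))
```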





[JDBC-32] with-connection does not accept standard jdbc URIs. Created: 18/May/12  Updated: 10/Jun/12  Resolved: 10/Jun/12

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Tyler Jennings Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None
Environment:

clojure 1.4.0
clojure.java.jdbc 0.2.1



 Description   

with-connection and other functions accepting a db-spec do not support standard JDBC URIs, like this one:

jdbc:mysql://localhost:3306/<database>

In my environment, passing that URI yields an exception:

Exception in thread "main" java.lang.NullPointerException
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:247)
at clojure.lang.RT.loadClassForName(RT.java:2050)
at clojure.java.jdbc$get_connection.invoke(jdbc.clj:175)
at clojure.java.jdbc$get_connection.invoke(jdbc.clj:166)
at clojure.java.jdbc$get_connection.invoke(jdbc.clj:168)
at clojure.java.jdbc$with_connection_STAR_.invoke(jdbc.clj:299)
at bungle_o.migration$create_exceptions.invoke(migration.clj:17)
at bungle_o.migration$_main.invoke(migration.clj:21)
at clojure.lang.Var.invoke(Var.java:411)
at user$eval191.invoke(NO_SOURCE_FILE:1)
at clojure.lang.Compiler.eval(Compiler.java:6511)
at clojure.lang.Compiler.eval(Compiler.java:6501)
at clojure.lang.Compiler.eval(Compiler.java:6477)
at clojure.core$eval.invoke(core.clj:2797)
at clojure.main$eval_opt.invoke(main.clj:297)
at clojure.main$initialize.invoke(main.clj:316)
at clojure.main$null_opt.invoke(main.clj:349)
at clojure.main$main.doInvoke(main.clj:427)
at clojure.lang.RestFn.invoke(RestFn.java:421)
at clojure.lang.Var.invoke(Var.java:419)
at clojure.lang.AFn.applyToHelper(AFn.java:163)
at clojure.lang.Var.applyTo(Var.java:532)
at clojure.main.main(main.java:37)

This URI works fine, however:

mysql://localhost:3306/<database>

Is this intentional or a bug?

Here's the mysql documentation for reference: http://dev.mysql.com/doc/refman/5.0/en/connector-j-reference-configuration-properties.html
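The eventual behavior can be sketched as treating an optional "jdbc:" prefix as noise before parsing; the function name is illustrative, not java.jdbc's actual code:

```clojure
;; Hedged sketch: strip an optional "jdbc:" prefix before URI parsing.
(defn strip-jdbc [^String spec]
  (if (.startsWith spec "jdbc:")
    (subs spec 5)
    spec))

;; (strip-jdbc "jdbc:mysql://localhost:3306/mydb") ;=> "mysql://localhost:3306/mydb"
```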



 Comments   
Comment by Sean Corfield [ 10/Jun/12 5:48 PM ]

It was a bug. Thanks for spotting that.





[JDBC-31] distinct? throws clojure.lang.ArityException, when applied with no arguments Created: 12/May/12  Updated: 10/Jun/12  Resolved: 10/Jun/12

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Jürgen Hötzel Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: patch,, performance

Attachments: Text File 0001-add-make-cols-unique-test-case-empty-cols.patch     Text File 0002-distinct-throws-clojure.lang.ArityException-when-app.patch    
Patch: Code and Test

 Description   

HSQLDB returns an empty ResultSet from (.getGeneratedKeys stmt) when no keys are generated, so this exception is thrown for each record without generated keys.

While the exception is caught in do-prepared-return-keys, it can add huge overhead from JVM exception handling. I did a performance test.

Before Patch:

clojure.java.test-jdbc> (time (sql/with-connection hsqldb-db (count (apply sql/insert-records :dummy  (map #(hash-map :name (str %) :id %) (range 10000))))))
"Elapsed time: 3429.346743 msecs"
10000

After Patch:

 clojure.java.test-jdbc> (time (sql/with-connection hsqldb-db (count (apply sql/insert-records :dummy  (map #(hash-map :name (str %) :id %) (range 10000))))))
"Elapsed time: 1397.444753 msecs"
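The root cause named in the title can be demonstrated at the REPL; the guard below is illustrative, not the attached patch itself:

```clojure
;; distinct? requires at least one argument, so applying it to the empty
;; column seq from an empty generated-keys ResultSet throws.
(apply distinct? [:a :b])   ;=> true
;; (apply distinct? [])     ; throws clojure.lang.ArityException

;; Illustrative guard that avoids the exception (and its overhead):
(defn all-distinct? [coll]
  (or (empty? coll) (apply distinct? coll)))
```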


 Comments   
Comment by Sean Corfield [ 10/Jun/12 5:24 PM ]

Thanx for the patch!





[JDBC-30] Consider to make do-prepared-return-keys public Created: 10/May/12  Updated: 10/May/12  Resolved: 10/May/12

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Roman Scherer Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

Libraries like Korma that build upon clojure.java.jdbc depend on, or have
use cases for, do-prepared-return-keys. It's a useful function and we
would like to use it. As discussed over here:

https://github.com/ibdknox/Korma/pull/60#issuecomment-5630604



 Comments   
Comment by Sean Corfield [ 10/May/12 2:12 PM ]

Made do-prepared-return-keys public. Will cut a new build.





[JDBC-29] Performance improvement (Remove intermediate lazy sequence in resultset-seq) Created: 10/May/12  Updated: 10/May/12  Resolved: 10/May/12

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Jürgen Hötzel Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: patch,, performance

Attachments: Text File 0001-Don-t-create-intermediate-lazy-sequences-of-vectors.patch    

 Description   

This improves performance on large result sets by up to 30%.



 Comments   
Comment by Sean Corfield [ 10/May/12 12:09 PM ]

Nice catch!





[JDBC-26] Add sqlite3 support Created: 13/Feb/12  Updated: 23/Apr/12  Resolved: 23/Apr/12

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Nelson Morris Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None

Attachments: Text File mssql-error.log     Text File sqlite3-support.patch     Text File sqlite_transaction_test.patch    
Patch: Code and Test

 Description   

sqlite3 has some problems:

1) PreparedStatement.addBatch does not do anything without parameters, so executeBatch doesn't do anything either.
2) Transaction support appears to require closing the ResultSet object from getGeneratedKeys.

I've only tested the included patch with the default test dbs and sqlite3.
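One way to address point 1, sketched below under the assumption that set-parameters is supplied by the caller (as java.jdbc does internally): fall back to .executeUpdate when there are no param-groups, since SQLite's .addBatch is a no-op without parameters.

```clojure
;; Hedged sketch; not the attached patch itself.
(defn execute-prepared
  [^java.sql.PreparedStatement stmt param-groups set-parameters]
  (if (empty? param-groups)
    [(.executeUpdate stmt)]          ; SQLite: avoid parameterless addBatch
    (do (doseq [group param-groups]
          (set-parameters stmt group)
          (.addBatch stmt))
        (seq (.executeBatch stmt)))))
```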



 Comments   
Comment by Sean Corfield [ 14/Feb/12 1:01 AM ]

This will break things - see JDBC-16 which required that .addBatch be called when there are no param-groups. I'm not going to reject this out of hand but patches for new database support need testing against MySQL, PostgreSQL and MS SQL Server.

Comment by Nelson Morris [ 14/Feb/12 1:48 PM ]

Test suite runs correctly for TEST_DBS=mysql,postgres,derby,hsqldb,sqlite3 and TEST_DBS=mysql-str,postgres-str. This just leaves MS SQL server, which I do not have the ability to run.

Regarding JDBC-16, when I revert to just using .executeBatch for the no param-groups case I can see the errors produced in the test-suite. Using .executeUpdate for the no params-group case continues to fix these errors (note MS SQL server untested). I do not see any other information in the ticket that would explain why using .executeUpdate instead of .addBatch/.executeBatch for the no params-group case would break JDBC-16. Is there a reason I am missing?

Comment by Nelson Morris [ 17/Feb/12 10:05 PM ]

I ran the test suite against an ec2 instance with MS SQL server 2008 R2. Current test suite on master fails with attached mssql-error.log. Using patch produces the same output.

Please let me know if there is anything that you would like changed.

Comment by Sean Corfield [ 17/Feb/12 10:14 PM ]

Thanx Nelson. I've been a bit busy with work this week but I'll try to look at this at the weekend. I appreciate your thoroughness on testing this!

Comment by Sean Corfield [ 29/Feb/12 11:33 PM ]

Integrated parts of the patch. SQLite 3 is now supported. Thanx!

Comment by Nelson Morris [ 01/Mar/12 12:57 AM ]

I still get the transaction issue (part 2 from original description) when clojure.java.jdbc tries to return the generated keys, in a transaction, using sqlite. https://gist.github.com/1947746

I believe this is because the ResultSet from .getGeneratedKeys never gets .close called, so sqlite keeps the db locked and can't rollback. Would you like me to make a new issue?

Comment by Sean Corfield [ 01/Mar/12 1:03 AM ]

The tests pass locally (Mac OS X) and on the Clojure build server (Linux) but you're right that I didn't include the close call on the record set.

I didn't include your additional test which I'm guessing is the key to highlighting the errant behavior for SQLite?

(I couldn't actually apply your patch because I'd spent some time getting all the tests passing for MS SQL Server with both the Microsoft and jTDS drivers)

Comment by Nelson Morris [ 01/Mar/12 1:19 AM ]

Attached patch that fails with `mvn test`.

Comment by Sean Corfield [ 23/Apr/12 3:35 AM ]

I believe this was fixed in 0.1.3 (at the end of February). As far as I can tell, all of the patch components are applied. Specifically, your transaction test is included in the test suite and passes.





[JDBC-19] Consolidate jdbc / jdbc.internal namespaces (post 0.1.0) Created: 12/Oct/11  Updated: 23/Apr/12  Resolved: 23/Apr/12

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

Since clojure.java.jdbc.internal is not intended to be called externally, merge it into clojure.java.jdbc and make its members private. Then merge/simplify functions where appropriate.



 Comments   
Comment by Sean Corfield [ 23/Apr/12 3:22 AM ]

Fixed in 0.2.0 release.





[JDBC-28] prepared-statement* behaves different on OpenJDK 1.6 and OS X Java 1.6. Created: 30/Mar/12  Updated: 31/Mar/12  Resolved: 31/Mar/12

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Jeroen van Dijk Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: None
Environment:

Ubuntu
OpenJDK:
java version "1.6.0_22"
OpenJDK Runtime Environment (IcedTea6 1.10.6) (6b22-1.10.6-0ubuntu1)
OpenJDK 64-Bit Server VM (build 20.0-b11, mixed mode)

OS X Java
java version "1.6.0_29"
Java(TM) SE Runtime Environment (build 1.6.0_29-b11-402-10M3527)
Java HotSpot(TM) 64-Bit Server VM (build 20.4-b02-402, mixed mode)

Relevant clojure libraries
[org.clojure/clojure "1.3.0"]
[korma "0.3.0-beta9"]
[org.clojure/java.jdbc "0.1.3"]
[postgresql/postgresql "9.1-901.jdbc3"]



 Description   

I'm using the Clojure library Korma to generate SQL for Postgres, which depends on clojure.java.jdbc. Recently I added a custom 'upsert' query. Using it on normal Java 1.6 is fine, but on OpenJDK it breaks due to the use of 'java.sql.Statement/RETURN_GENERATED_KEYS'. This makes the Postgres driver append a seemingly random "RETURNING *" to the end of the query.

On normal Java 1.6, java.sql.Statement/RETURN_GENERATED_KEYS doesn't exist, so an exception is caught and everything works as expected. Sounds like the world upside down, but I'm probably missing the rationale behind this.

The line that causes the weird behavior is here: https://github.com/clojure/java.jdbc/blob/master/src/main/clojure/clojure/java/jdbc/internal.clj#L315

The code that showcases the 'bug' is here:
https://gist.github.com/2e8a3d55d80707ce79e0

(Sorry for the monkey patching in the gist, if there is a better way I would love to hear it)

I guess a naive patch would be to remove the java.sql.Statement/RETURN_GENERATED_KEYS line, but there must be a reason for its existence I assume.
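The line in question boils down to preparing a statement that asks the driver to return generated keys; a minimal sketch (the function name is an assumption):

```clojure
;; Hedged sketch of preparing an insert with generated-key reporting.
(import '(java.sql Connection PreparedStatement Statement))

(defn prepare-returning-keys ^PreparedStatement [^Connection con ^String sql]
  (.prepareStatement con sql Statement/RETURN_GENERATED_KEYS))
```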



 Comments   
Comment by Sean Corfield [ 31/Mar/12 1:30 AM ]

RETURN_GENERATED_KEYS is required for inserts to return the generated keys. Since this works on Sun's Java 6 and with all intended databases, and is per spec, it is not going away.

Right now OpenJDK / PostgreSQL is not a supported combination, sorry.





[JDBC-25] The classnames private var should be public Created: 07/Feb/12  Updated: 12/Feb/12  Resolved: 12/Feb/12

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Trivial
Reporter: Nicolas Buduroi Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: None


 Description   

I've got a bunch of integration/system tests in Lobos which are run against multiple target databases. As I do not want to force all target drivers to be made dev-dependencies, I check whether the driver is available before running those tests. As long as the :classname key is in the db-spec it's all fine, but now that java.jdbc supports db-specs without that key, it would be useful for me to have access to the classnames var, so it would be great to have it made public. This isn't a blocker as up till now I haven't really needed it, but I'm planning to add some tests for string-based db-specs.



 Comments   
Comment by Sean Corfield [ 12/Feb/12 4:21 AM ]

Right now, that's in the internal namespace, which is going away, so I can't make it part of the API. You can always pull private vars out with @#'clojure.java.jdbc.internal/classnames, but bear in mind that the internal ns will merge with the parent ns soon!





[JDBC-22] Infer :classname from :subprotocol Created: 17/Oct/11  Updated: 04/Jan/12  Resolved: 04/Jan/12

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Phil Hagelberg Assignee: Phil Hagelberg
Resolution: Completed Votes: 0
Labels: None

Attachments: Text File 0001-Infer-classname-from-subprotocol.patch    
Patch: Code and Test
Waiting On: Sean Corfield

 Description   

Having to specify both :classname and :subprotocol seems like needless boilerplate. Only :subprotocol should be required.



 Comments   
Comment by Phil Hagelberg [ 17/Oct/11 1:48 PM ]

This should take care of it. Updated the tests to remove the inferred value.

Comment by Sean Corfield [ 04/Jan/12 11:53 PM ]

Sorry, I should have closed this out ages ago - when the patch was applied.





[JDBC-24] Make where-params optional for update-values function Created: 04/Jan/12  Updated: 04/Jan/12  Resolved: 04/Jan/12

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Trent Ogren Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: None

Attachments: Text File optional-where-params-update-values.patch    
Patch: Code and Test

 Description   

This prevents having to pass a wonky always-true clause (e.g. ["1=1"]) as the where-params when updating all the rows in a table.



 Comments   
Comment by Trent Ogren [ 04/Jan/12 11:04 PM ]

I'm aware that I might have to sign the Clojure CA. I just thought I'd see what you think first...

Comment by Sean Corfield [ 04/Jan/12 11:43 PM ]

See JDBC-14 since I'm declining this for the same reason. Performing a global update is an unusual operation and I think the API should prevent you from accidentally updating all rows, just as it should prevent you from deleting all rows. Requiring a where clause does that.





[JDBC-21] Accept string URL as connection parameters Created: 17/Oct/11  Updated: 02/Nov/11  Resolved: 02/Nov/11

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Phil Hagelberg Assignee: Phil Hagelberg
Resolution: Completed Votes: 0
Labels: None

Attachments: Text File 0001-Accept-string-or-URI-in-with-connection.patch     Text File jdbc-21-take2.patch    
Patch: Code and Test

 Description   

Should support connection parameters in this style:

(with-connection "postgres://user:password@host.com:port/db-name" [...])



 Comments   
Comment by Phil Hagelberg [ 17/Oct/11 1:18 PM ]

Talked this over with Sean on IRC; I'll put together a patch.

Comment by Phil Hagelberg [ 17/Oct/11 4:21 PM ]

Implements said functionality. Patch depends upon supplied patch for JDBC-22.

Comment by Phil Hagelberg [ 17/Oct/11 4:54 PM ]

Hold off on this patch; apparently the scheme and subprotocol don't always match up. Will provide an updated patch.

Comment by Phil Hagelberg [ 17/Oct/11 6:00 PM ]

This one should take care of the subprotocol issue; it also adds support for running the tests against postgres.

Comment by Sean Corfield [ 02/Nov/11 2:52 PM ]

Patches applied - thank you!





[JDBC-15] resultset-seq uses structmap which has been deprecated Created: 13/Aug/11  Updated: 16/Oct/11  Resolved: 16/Oct/11

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Task Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

structmap has been deprecated. The open question is whether to replace its use with a record or a regular map (I'd expect the latter to be less surprising).



 Comments   
Comment by Sean Corfield [ 12/Oct/11 4:29 PM ]

See JDBC-3 for some background on this.

Comment by Sean Corfield [ 16/Oct/11 3:41 PM ]

The way resultset-seq uses create-struct and struct doesn't seem to leverage any inherent functionality of structmaps except perhaps some vague guarantee of ordering of columns in rows. Replacing the create-struct / struct calls with

(into {} (map vector keys (row-values)))

maintains the order up to 16 columns (PersistentArrayMap) so that seems a reasonable compromise.

I'm not sure that preserving order is something that should be documented or supported?
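
The replacement described above can be sketched as follows; the column names and values here are illustrative only, not from the library's test suite:

```clojure
;; Sketch of the struct-free row construction described above:
;; pair each column keyword with the corresponding row value and
;; pour the pairs into a map. For small maps this yields a
;; PersistentArrayMap, which preserves insertion order.
(let [keys       [:id :name :email]
      row-values [1 "Sean" "sean@example.com"]]
  (into {} (map vector keys row-values)))
;; => {:id 1, :name "Sean", :email "sean@example.com"}
```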

Comment by Sean Corfield [ 16/Oct/11 4:32 PM ]

No longer uses structmap. Attempts to provide column order (up to 16 columns) but don't rely on that... it was never documented as a guarantee!





[JDBC-3] Inconsistency between tuples and regular maps Created: 08/May/11  Updated: 12/Oct/11  Resolved: 12/Oct/11

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Task Priority: Minor
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Duplicate Votes: 0
Labels: None


 Description   

Stuart Sierra: In general, c.c.sql is inconsistent about using tuples of column values versus maps of column=>value pairs. Both are useful, and I'd like to see versions of all the core functions that support both modes.



 Comments   
Comment by Sean Corfield [ 13/Aug/11 6:59 PM ]

Having reviewed this after using the library for quite a while, I don't really see any inconsistencies except, perhaps, in the naming of update-values (and update-or-insert-values).

insert-values is the base function, taking a table name, a vector of column names and any number of vectors of column values. Generates one insert statement (containing multiple rows). If only one row is inserted, returns a map of generated keys if the database supports that.

insert-rows takes a table name and any number of vectors of column values (for all columns); uses insert-values.

insert-records takes a table name and any number of maps (containing column/value pairs); uses insert-values to insert one record at a time (and thus may return a sequence of maps of generated keys).

insert-record takes a table name and a single map; uses insert-records and returns a map of generated key if the database supports that.

delete-rows takes a table name and a where clause. Perhaps delete-where might be a better name?

update-values takes a table name, a where clause and a map (of column/value pairs to update). Arguably this should be renamed since it doesn't take the same argument pattern as insert-values, but whilst update-records might sound more appropriate it only takes one "record" and update-record is clearly not right since it can update more than one row!

update-or-insert-values (taking the same arguments as update-values) can however lead to inserting a single record but it can also update multiple records instead.

On both of the latter functions, an analogy could be drawn with the findAndModify command in MongoDB which has an upsert option (to insert the document if no matches are found for the update), but most notably, findAndModify only updates a single record. I don't believe SQL has any portable equivalent but perhaps find-and-modify is a better name for update-values? Or perhaps, in the spirit of my suggestion for delete-where, we might call it update-where - and use an optional keyword argument to indicate that an insert-record should be performed when no rows match?

The only other comment I could make on update-values is that it might be nice to have a variant that took: table where-params column-names value-group - somewhat inline with insert-values (although not allowing any number of vectors of column values).

Comment by Sean Corfield [ 12/Oct/11 4:29 PM ]

The API aspects of this ticket are covered in JDBC-20 (post 0.1.0) and the struct-map issue is covered in JDBC-15 so I'm closing this out as a duplicate for now.

Comment by Sean Corfield [ 12/Oct/11 4:29 PM ]

Essentially duplicated with more focus in JDBC-15 and JDBC-20.





[JDBC-9] Solve duplicate column label issue Created: 10/Jun/11  Updated: 11/Oct/11  Resolved: 11/Oct/11

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

Per Peter Siewart (and others):

– While working with clojure.java.jdbc, I kept running into the "ResultSet must have unique column labels" exception, so I rewrote it to automatically rename duplicate columns. Now, I can run "SELECT * FROM tbl1 JOIN tbl2 ON tbl1.id = tbl2.id" without causing any errors.

This would definitely be a nice enhancement, but so far no one with a signed CA has offered a patch.

Figuring out the appropriate renaming strategy is key - a sensible default with an optional function to allow users to specify how to do this? Or perhaps only renaming if users specify such a function?



 Comments   
Comment by Sean Corfield [ 11/Oct/11 4:12 PM ]

Applied a variant of Peter's patch and added unit tests.





[JDBC-16] do-prepared doesn't throw on invalid SQL Created: 07/Sep/11  Updated: 11/Oct/11  Resolved: 11/Oct/11

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Allen Rohner Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   
(jdbc/do-prepared "syntactically invalid SQL")

returns nil, while

(jdbc/do-commands "syntactically invalid SQL")
throws as expected.



 Comments   
Comment by Allen Rohner [ 07/Sep/11 4:00 PM ]

Actually, do-prepared doesn't seem to do anything at all. (jdbc/do-commands "insert into foo (foo,bar) VALUES (1,2)") works as expected; the same command with do-prepared doesn't do anything.

Comment by Sean Corfield [ 11/Oct/11 2:12 PM ]

Ensure .addBatch is called even when there are no param-groups.





[JDBC-17] Type-hint public fns returning Connection Created: 10/Oct/11  Updated: 10/Oct/11  Resolved: 10/Oct/11

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Stuart Sierra Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None

Attachments: Text File JDBC-17-a.patch    
Patch: Code

 Description   

clojure.java.jdbc/connection and find-connection both return a java.sql.Connection but are not type-hinted.

There is a type hint on clojure.java.jdbc.internal/connection*, but it does not carry through to the public Var.

Tagging these functions eliminates 9 reflection warnings in a project I have.
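
A sketch of the kind of hint meant here; the wrapper body is illustrative only (the actual patch is attached to the ticket):

```clojure
;; Illustrative only: a public wrapper whose return value is
;; type-hinted as java.sql.Connection, so callers that invoke
;; Connection methods on the result avoid reflection.
(defn ^java.sql.Connection connection
  "Returns the current database connection."
  []
  (clojure.java.jdbc.internal/connection*))
```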



 Comments   
Comment by Sean Corfield [ 10/Oct/11 11:21 PM ]

Patch applied. Thanx Stuart!





[JDBC-14] delete-rows does not work with null WHERE Created: 06/Aug/11  Updated: 04/Sep/11  Resolved: 04/Sep/11

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Minor
Reporter: Allen Rohner Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: None


 Description   

(delete-rows "foo" []) generates the SQL "delete from foo where null", which deletes 0 rows. Instead, I expect it to generate "delete from foo".



 Comments   
Comment by Allen Rohner [ 06/Aug/11 3:46 PM ]

I can't find the button to edit the issue; this happened on version 0.0.6.

Comment by Sean Corfield [ 10/Aug/11 1:46 AM ]

I actually think that would be rather dangerous. Imagine constructing the where parameters dynamically and accidentally passing in an empty sequence - you'd delete everything in the table.

I believe you can safely and deliberately get the effect you are looking for by doing either of these:

(delete-rows "foo" [true])
(delete-rows "foo" [1])
Comment by Sean Corfield [ 04/Sep/11 1:32 AM ]

Given the lack of response to my comment and the fact that there is a clear and explicit workaround, I'm resolving this, declined.





[JDBC-6] Add hook to allow operations on PreparedStatement prior to query execution Created: 26/May/11  Updated: 10/Aug/11  Resolved: 10/Aug/11

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

It would be very useful to be able to pass in a function argument to several c.j.j functions that would be called on the PreparedStatement before executing the query, for example, calling setFetchSize(). I'm not sure what the syntax should look like for this...



 Comments   
Comment by Sean Corfield [ 17/Jul/11 3:36 AM ]

I've started the work to reorganize the code to better expose the statement object for this sort of manipulation.

Comment by Sean Corfield [ 17/Jul/11 5:57 PM ]

Two options under consideration:

  • allow raw PreparedStatement to be passed in by user where appropriate so they have complete control
  • allow map of settings that would be used to create the PreparedStatement and set its options

I like the latter better but Alan Malloy made the point that the former may still be needed for total control anyway.

I'm leaning toward adding a function to ease creation of PreparedStatement objects to at least provide a clean API and then I'll consider how best to integrate that with the other functions.

Comment by Sean Corfield [ 17/Jul/11 10:02 PM ]

Added prepare-statement* to jdbc.internal which takes a connection, a SQL string and a map of options, and returns a PreparedStatement. Will expose this in the top-level namespace.

with-query-results now allows the first element in the sql-params vector to be either:

  • PreparedStatement object created by the user
  • map of options passed to prepare-statement* in order to create the PreparedStatement

Options supported are in the docstring. May add additional post-creation options (beyond fetch-size and max-rows).
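
A hedged sketch of the options-map form described above; it assumes a valid db-spec binding and a fruit table, neither of which comes from this ticket:

```clojure
;; Sketch: the first element of the sql-params vector is a map of
;; options used when creating the PreparedStatement, followed by
;; the SQL string and its parameters.
(with-connection db-spec
  (with-query-results rs [{:fetch-size 100 :max-rows 500}
                          "SELECT * FROM fruit WHERE cost < ?" 50]
    (doall rs)))
```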

Comment by Sean Corfield [ 10/Aug/11 2:23 AM ]

Users have verified that being able to specify :fetch-size in with-query-results handles the common use cases for performance.





[JDBC-4] Global connection mechanism / connection pooling Created: 08/May/11  Updated: 10/Aug/11  Resolved: 10/Aug/11

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

Nicolas Buduroi: What I would like to see added is a global connection mechanism like in ClojureQL and Lobos. We could even add a connection pool behind the scene.



 Comments   
Comment by Sean Corfield [ 09/Jul/11 12:23 PM ]

For production usage, I've been relying on c3p0 for connection pooling which works very well with c.j.j so I'm not sure that we need something specific within the library itself. What we do need, of course, is documentation on how to use something like c3p0 for connection pooling with c.j.j!

Comment by Sean Corfield [ 17/Jul/11 3:31 AM ]

Given the ease of using 3rd-party connection pooling libraries like c3p0, I think this is low priority (and I may simply close out after documenting how to use c3p0 with c.j.j).

Comment by Sean Corfield [ 10/Aug/11 2:21 AM ]

I've documented it here: https://github.com/clojure/java.jdbc/blob/master/doc/clojure/java/jdbc/ConnectionPooling.md

I don't believe this should be in c.j.j itself so I'm going to close this.

Comment by Sean Corfield [ 10/Aug/11 2:22 AM ]

Documented how to use a 3rd party library for connection pooling which I think is sufficient since it's so easy.





[JDBC-12] exceptions when trying to insert into apache derby Created: 24/Jul/11  Updated: 25/Jul/11  Resolved: 25/Jul/11

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Kevin Downey Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None

Waiting On: Sean Corfield

 Description   

java.sql.SQLException: Only executeBatch and clearBatch allowed in the middle of a batch.
at org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedConnection.newSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.ConnectionChild.newSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedStatement.checkIfInMiddleOfBatch(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedPreparedStatement.executeStatement(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedPreparedStatement.executeUpdate(Unknown Source)
at clojure.java.jdbc.internal$do_prepared_return_keys_STAR_$fn__84.invoke(internal.clj:272)
at clojure.java.jdbc.internal$transaction_STAR_.invoke(internal.clj:185)
at clojure.java.jdbc.internal$do_prepared_return_keys_STAR_.doInvoke(internal.clj:272)
at clojure.lang.RestFn.applyTo(RestFn.java:140)
at clojure.core$apply.invoke(core.clj:542)
at clojure.java.jdbc$insert_values.doInvoke(jdbc.clj:259)
at clojure.lang.RestFn.invoke(RestFn.java:443)
at hiredman.triples$store_triple$fn_471$fn_472.invoke(triples.clj:39)
at clojure.java.jdbc.internal$transaction_STAR_.invoke(internal.clj:174)
at hiredman.triples$store_triple$fn__471.invoke(triples.clj:39)
at clojure.java.jdbc.internal$with_connection_STAR_.invoke(internal.clj:156)
at hiredman.triples$store_triple.invoke(triples.clj:39)
at hiredman.clojurebot.factoids$fn__2689.invoke(factoids.clj:139)
at clojure.lang.MultiFn.invoke(MultiFn.java:167)
at hiredman.clojurebot.factoids$factoid_command_run.invoke(factoids.clj:173)
at conduit.core$fn_284$a_arr285$a_arr_286.invoke(core.clj:233)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$a_except$a_except__307.invoke(core.clj:350)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at clojurebot.conduit$a_indirect$this_fn__500.invoke(conduit.clj:9)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.irc$reply_fn$irc_reply_fn__350.invoke(irc.clj:22)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.irc$irc_run$handle_msg__421.invoke(irc.clj:166)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$a_run.invoke(core.clj:28)
at conduit.irc$irc_run$run__424.invoke(irc.clj:175)
at conduit.irc$irc_run.doInvoke(irc.clj:179)
at clojure.lang.RestFn.applyTo(RestFn.java:140)
at clojure.core$apply.invoke(core.clj:542)
at clojurebot.core$main$fn_4654.invoke(core.clj:214)
at clojure.lang.AFn.call(AFn.java:18)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
Caused by: java.sql.SQLException: Only executeBatch and clearBatch allowed in the middle of a batch.
at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown Source)
... 185 more

Replacing .executeUpdate with .executeBatch in do-prepared-return-keys* seems to fix the problem and appears to be what was meant in the first place: the result is bound to 'counts', and .executeUpdate returns a single count while .executeBatch returns multiple.



 Comments   
Comment by Sean Corfield [ 24/Jul/11 3:38 PM ]

Unfortunately .executeBatch had to be changed to .executeUpdate in order to solve JDBC-10:

https://github.com/clojure/java.jdbc/commit/9240d66c9124454187416861a0e2021952eb4f40

MS SQL Server and PostgreSQL both fail on .executeUpdate in that call position.

Clearly we need as many of these DBs up and running for automated testing as possible before I can attempt to fix this so that I don't accidentally break something else.

Comment by Sean Corfield [ 24/Jul/11 5:26 PM ]

I now have some unit tests for MySQL, HSQLDB and Derby. insert-records fails with the exception reported here, only for Derby. I'll complete the suite of tests, try to get PostgreSQL and MS SQL Server tests running as well and then I'll be in a good position to fix things...

Comment by Sean Corfield [ 25/Jul/11 4:52 PM ]

The culprit was .addBatch which has now been removed and do-prepared-return-keys now explicitly only has one param-group argument.





[JDBC-8] Review (and fix) reflection warnings Created: 08/Jun/11  Updated: 17/Jul/11  Resolved: 17/Jul/11

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Task Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

There are quite a few reflection warnings in the c.j.j code which should be reviewed and performance critical pieces type hinted.



 Comments   
Comment by Allen Rohner [ 09/Jul/11 3:50 PM ]

Empirically, I can tell you the one in do-prepared* (the (.prepareStatement (connection) ...) call) and the one in transaction* (the (let [con (connection*) ...] ...) binding) are both performance-critical for me.

Comment by Sean Corfield [ 09/Jul/11 4:00 PM ]

Thanx Allen. The impression I get from clojure-dev is that new contrib libraries should have zero reflection warnings, so I plan to find and fix all such cases in c.j.j.

Comment by Sean Corfield [ 17/Jul/11 5:47 PM ]

All reflection warnings have been removed.





[JDBC-11] Transaction not rolled back on Throwable exception Created: 03/Jul/11  Updated: 17/Jul/11  Resolved: 17/Jul/11

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 1
Labels: None

Attachments: Text File 0001-Don-t-commit-transaction-in-case-of-exceptions.patch     Text File test-mechanism.patch    

 Description   

(transaction
  (insert-values .....)
  (throw (Throwable. "baaaaad")))

Reported by Sebastián Galkin on github.

Transactions rollback on Exception but not Throwable.
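
A minimal sketch of the requested behavior: commit only on success, roll back on any Throwable before rethrowing. Here con and do-work are hypothetical names; this is not the library's actual implementation.

```clojure
;; Sketch only: no commit is reachable once anything is thrown,
;; so Errors as well as Exceptions trigger the rollback.
(try
  (do-work con)
  (.commit con)
  (catch Throwable t
    (.rollback con)
    (throw t)))
```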



 Comments   
Comment by Sebastián Bernardo Galkin [ 03/Jul/11 11:07 PM ]

I think the correct behavior would be to call rollback on the transaction, not just skip calling commit. At the end of the transaction block, the user will expect one of commit or rollback to have been called.

I was thinking about how to test this. How do you feel about a mock Connection object, where we can check if commit and/or rollback have been called as part of the test?

The same mechanism could be used to test other features.

Comment by Sebastián Bernardo Galkin [ 04/Jul/11 12:23 AM ]

I was thinking of something like the attached for testing the issue (and other functionality).
It needs several improvements, but it's enough to get the idea.
What do you think?

Comment by Sean Corfield [ 04/Jul/11 12:27 AM ]

FYI, here's the IRC discussion about this, for the record:

[5:11pm] <paraseba>
In clojure.java.jdbc there is a transaction function, which is supposed to rollback if the passed body throws an exception, but it only does that in case of an Exception, not any Throwable. What's the rationale behind this?
[5:11pm] <paraseba>
I'd want my transaction rolledback in case of any errors
[5:12pm] <paraseba>
seancorfield: maybe you can explain me this?
[5:53pm] <amalloy>
paraseba: catching Errors is generally a bad practice. i'm not saying it's wrong all the time, but Errors are often unrecoverable anyway
[5:53pm] <amalloy>
eg, "You ran out of heap space! I can't even allocate memory to throw an exception!"
[5:54pm] <paraseba>
but, even in the worst conditions, shouldn't we try to roll back the transaction? isn't that better than committing in this unexpected error condition?
[5:55pm] <paraseba>
we can then rethrow the Throwable, after trying to rollback
[5:55pm] <lucian>
paraseba: i don't think the db will commit if it gives you an error
[5:55pm] <lucian>
and as amalloy said, errors like OOM are best solved with exit()
[5:55pm] <paraseba>
lucian: it will ... jdbc is catching Exception and rolling back in that case .... but it commits in a finally
[5:56pm] <paraseba>
so, if you have an Error thrown, it will commit
[5:56pm] <paraseba>
I guess that's more surprising than a rollback
[5:57pm] <paraseba>
the logic is .... (try do-your-thing (catch Exception roll-back) (finally commit))
[5:57pm] <lucian>
paraseba: well, then maybe you don't want to commit in finally?
[5:57pm] <paraseba>
I don't, not if I got an Error
[5:58pm] <amalloy>
lucian: i think he's upset that a library he's using is committing on error
[5:59pm] <paraseba>
amalloy: I can solve it easily by wrapping my operations in a catch Throwable -> rollback -> rethrow, but I think it's not the right behavior for the library
[5:59pm] <paraseba>
I would expect a commit only if the block ended without any kind of exceptions or errors. don't you agree ?
[6:01pm] <amalloy>
paraseba: meh. all kinds of weird stuff can happen if an Error occurs; i wouldn't be entirely certain that an attempt to "recover" wouldn't make things worse due to some weird program state caused by the Error. i mean, my completely-unresearched opinion is that catching Throwable would be better, but you can't really rely for your program's correctness on anything that happens after an Error
[6:02pm] <paraseba>
but, we are forcing a commit after an Error
[6:04pm] <paraseba>
the usual logic should be .... (do do-your-thing (commit)) if do-your-thing throws anything ... no commit is done. Putting a commit in a finally forces the commit, even after Errors
[6:06pm] <amalloy>
yeah, i think i agree
[6:08pm] <paraseba>
amalloy: ok, I'll report an issue, thanks

Comment by Sebastián Bernardo Galkin [ 09/Jul/11 2:36 PM ]

It's important to realize that there are 2 problems here.

a.- Transaction not rolled back on Throwable
b.- The original Exception is "swallowed" and a new, different one is thrown

I think both are equally serious.

Comment by Sean Corfield [ 09/Jul/11 3:57 PM ]

I agree, Sebastián. I've raised this issue for input from clojure-dev (looking for stylistic guidelines for contrib libraries) because I think both problems need to be fixed.

My current leaning is toward supporting three paths:

  • success: rollback transaction if user code sets rollback true, else commit
  • Exception: rollback transaction and rethrow (without wrapping)
  • Throwable: allow it to escape, take no action
Comment by Sebastián Bernardo Galkin [ 12/Jul/11 11:05 AM ]

I'm attaching a patch that fixes the problem without adding any tests.

Comment by Sean Corfield [ 17/Jul/11 3:28 AM ]

On success, commit unless rollback has been set (in which case rollback).
On Exception, rollback and rethrow as-is.
On Error (Throwable), allow it to escape.
In all cases, set rollback false and restore auto-commit setting.

The only two changes here should be:

  • Exception is no longer wrapped when thrown (may break code that expected the wrapping?)
  • No commit attempted on Error (Throwable)
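
The resolved behavior can be sketched as follows (again with a hypothetical FakeConnection; the real library also restores the auto-commit setting and resets the rollback flag, which this sketch omits):

```java
// Sketch of the three-path resolution: commit on success unless rollback
// was requested; rollback and rethrow as-is on (runtime) Exception;
// let an Error escape without committing or rolling back.
public class TransactionFixDemo {
    static class FakeConnection {
        boolean committed, rolledBack, rollbackOnly;
        void commit()   { committed = true; }
        void rollback() { rolledBack = true; }
    }

    static void transact(FakeConnection con, Runnable body) {
        try {
            body.run();
            // Success path: honor a user-requested rollback, else commit.
            if (con.rollbackOnly) con.rollback(); else con.commit();
        } catch (RuntimeException e) {
            con.rollback();
            throw e; // rethrown as-is, no wrapping
        }
        // Errors are deliberately not caught: they escape with
        // neither commit nor rollback attempted.
    }
}
```

The key difference from the flawed version is that the commit moved out of the finally block and into the success path.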




[JDBC-5] Adding stropping and qualified identifiers to java.jdbc Created: 24/Apr/11  Updated: 09/Jul/11  Resolved: 09/Jul/11

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Nicolas Buduroi Assignee: Nicolas Buduroi
Resolution: Completed Votes: 0
Labels: None

Attachments: Text File 0001-Added-the-with-stropping-macro-and-the-as-identifier.patch    
Patch: Code
Waiting On: Sean Corfield

 Description   

Following this discussion:

https://groups.google.com/d/topic/clojure/j7fXdkFMrec/discussion

I've implemented a simple stropping feature that also provides a way to supply qualified identifiers, plus a function (as-identifier) that can be reused in user code. The implementation can be found here:

https://github.com/budu/java.jdbc/commit/0822d323cd4d267533a88241be68cb184ff9bf28

It's also attached to this issue.

Here are some examples of how the feature could be used:

user> (create-table :foo [:bar :integer] [:baz :text])
"CREATE TABLE foo (bar integer, baz text)"
user> (create-table [:schema :foo] [:bar :integer] [:baz :text])
"CREATE TABLE schema.foo (bar integer, baz text)"
user> (with-stropping \" identity (create-table [:schema :foo] [:bar :integer] [:baz :text]))
"CREATE TABLE \"schema\".\"foo\" (\"bar\" integer, \"baz\" text)"
user> (with-stropping [[ ]] identity (create-table [:schema :foo] [:bar :integer] [:baz :text]))
"CREATE TABLE [schema].[foo] ([bar] integer, [baz] text)"
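
The quoting in the examples above amounts to wrapping each dotted segment of a (possibly schema-qualified) identifier in delimiter characters. A minimal Java sketch of that idea (the helper name is hypothetical, not part of the proposed API):

```java
// Stropping sketch: split a qualified identifier on '.' and wrap each
// segment in the given open/close delimiters, e.g. "schema"."foo" or
// [schema].[foo] depending on the delimiters chosen.
public class Stropping {
    static String strop(String identifier, String open, String close) {
        StringBuilder sb = new StringBuilder();
        String[] parts = identifier.split("\\.");
        for (int i = 0; i < parts.length; i++) {
            if (i > 0) sb.append('.');
            sb.append(open).append(parts[i]).append(close);
        }
        return sb.toString();
    }
}
```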



 Comments   
Comment by Sean Corfield [ 24/Apr/11 8:21 PM ]

Duplicate of https://github.com/clojure/java.jdbc/issues/7 - thanx for the patch!

Comment by Sean Corfield [ 09/Jul/11 2:28 PM ]

with-naming-strategy, with-quoted-identifiers, as-quoted-str address this.





[JDBC-2] create-table should allow table-spec Created: 08/May/11  Updated: 09/Jul/11  Resolved: 09/Jul/11

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

create-table takes a name and a seq of column-specs but does not allow any table-spec to follow, so you cannot, for example, specify the engine type / charset / etc.



 Comments   
Comment by Sean Corfield [ 09/Jul/11 2:18 PM ]

create-table now allows optional :table-spec "table level attributes" at the end of the specs.
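
At the SQL level, the optional table-spec simply appends a string after the closing paren of the column list. A minimal Java sketch of the generated DDL (the helper name is illustrative, not java.jdbc's API):

```java
// Sketch: build a CREATE TABLE statement from column specs, appending an
// optional table-spec ("table level attributes") such as an engine/charset
// clause after the column list.
public class CreateTableSql {
    static String createTable(String name, String[] columnSpecs, String tableSpec) {
        String cols = String.join(", ", columnSpecs);
        String sql = "CREATE TABLE " + name + " (" + cols + ")";
        return tableSpec == null ? sql : sql + " " + tableSpec;
    }
}
```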





[JDBC-10] Attempts to return generated keys on batch inserts fails on PostgreSQL and MS SQL Server Created: 18/Jun/11  Updated: 27/Jun/11  Resolved: 27/Jun/11

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

From andreasmk2:

"JDBC driver for PostgreSQL and ms-sql does not support returning of generated keys from statements executed in batch. The SQL exception was thrown when 'do-prepared*' was called with 'return-keys' set to true. Currently only the function 'insert-values' calls a new method 'do-single-prepared-get-keys*' when single relation is inserted."



 Comments   
Comment by Sean Corfield [ 27/Jun/11 5:54 PM ]

I just committed a potential fix for this. It's available in the Sonatype snapshots repo.





[JDBC-7] Clojure 1.2 compatibility Created: 06/Jun/11  Updated: 08/Jun/11  Resolved: 08/Jun/11

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Blocker
Reporter: Sean Corfield Assignee: Aaron Bedra
Resolution: Completed Votes: 0
Labels: None


 Description   

Per Aaron Bedra: "I just tried to use this on a 1.2 project that we have been working on for a while now and the dynamic defs are killing it. Can I change it to use the older ^{:dynamic true} syntax and release a new version? Is that the only thing that is keeping this off of 1.2? For now we need to make sure the new contrib libraries are 1.2 compatible, so people can start transitioning now."






Generated at Fri Jul 25 01:52:33 CDT 2014 using JIRA 4.4#649-r158309.