
[JDBC-144] Passes options all the way down in key CRUD fns Created: 21/Sep/16  Updated: 21/Sep/16

Status: Open
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Isaac Sanders Assignee: Sean Corfield
Resolution: Unresolved Votes: 0
Labels: None

Attachments: Text File 0001-Passes-options-all-the-way-down-in-key-CRUD-fns.patch    
Patch: Code

 Description   

Specifically `insert!`, `update!`, and `delete!`.

It is terribly inconvenient not to be able to pass options down, especially when doing so may allow one to tune their use of the library better.
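For illustration, a hedged sketch of the desired calling convention (the db-spec, table, and option values here are invented for the example; the trailing options map should flow down to the underlying machinery rather than being dropped):

(require '[clojure.java.jdbc :as jdbc]
         '[clojure.string :as str])

;; options map passed down through insert!, update!, and delete!
(jdbc/insert! db-spec :fruit
              {:name "Apple" :appearance "rosy" :cost 24}
              {:entities str/lower-case})

(jdbc/update! db-spec :fruit
              {:cost 29}
              ["name = ?" "Apple"]
              {:entities str/lower-case})

(jdbc/delete! db-spec :fruit
              ["name = ?" "Apple"]
              {:entities str/lower-case})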



 Comments   
Comment by Sean Corfield [ 21/Sep/16 11:33 AM ]

For reference: https://github.com/clojure/java.jdbc/pull/45

Isaac, thanks for creating the JIRA issue for tracking. Do you think you'll sign the CA etc? If not, that's fine, but I won't be able to touch the patch. As noted on the PR, I'll just use the changes in the PR as a checklist of places to review in the code.

Comment by Isaac Sanders [ 21/Sep/16 12:58 PM ]

I don't think I will be able to sign the CA for this contribution. I would appreciate it if you were able to review and make changes in the spirit of the PR.

– Isaac

Comment by Sean Corfield [ 21/Sep/16 1:32 PM ]

No problem, just wanted to check with you first, before deciding what path to take.





[JDBC-143] Make it easier to test java.jdbc in other environments Created: 17/Sep/16  Updated: 17/Sep/16

Status: Open
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Make the version properties in pom.xml and/or project.clj, and the database names, usernames, and passwords, configurable via environment variables.

This came up via a suggestion from the PostgreSQL community – see this pull request for guidance on what we could open up: https://github.com/clojure/java.jdbc/pull/44/files
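A minimal sketch of the idea, assuming hypothetical environment variable names (TEST_DBNAME etc. are illustrative, not taken from the PR):

;; Fall back to the current hard-coded defaults when the
;; environment variable is unset.
(defn env-or [var-name default]
  (or (System/getenv var-name) default))

(def test-db-spec
  {:dbtype   (env-or "TEST_DBTYPE" "postgresql")
   :dbname   (env-or "TEST_DBNAME" "clojure_test")
   :user     (env-or "TEST_DBUSER" "clojure_test")
   :password (env-or "TEST_DBPASS" "clojure_test")})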






[JDBC-142] Incorrect insert! usage in README Created: 02/Sep/16  Updated: 02/Sep/16  Resolved: 02/Sep/16

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Trivial
Reporter: Julien Eluard Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

README.md incorrectly uses insert! in the Example Usage section.

It should be:

(j/insert-multi! mysql-db :fruit
  [{:name "Apple" :appearance "rosy" :cost 24}
   {:name "Orange" :appearance "round" :cost 49}])


 Comments   
Comment by Sean Corfield [ 02/Sep/16 1:20 AM ]

Good catch, thank you!

Comment by Sean Corfield [ 02/Sep/16 1:24 AM ]

README updated.





[JDBC-110] Requesting a nested transaction with a more restrictive isolation level appears to succeed, but does not have the requested effect Created: 22/Jul/15  Updated: 01/Sep/16  Resolved: 27/Jul/15

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Donald Ball Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None

Attachments: Text File check-transaction-isolation-level.patch    
Patch: Code and Test

 Description   

The library helpfully allows nested transactions by maintaining an internal depth counter, but silently disregards any transaction options for nested transactions. This has actually bitten me recently in production; code that required a serializable transaction in order to maintain an invariant was inadvertently being executed within a repeatable-read transaction.

This patch changes the behavior to raise if a nested transaction requests a different isolation level than the one the actual transaction is running under.

A similar problem exists for nested transactions that request read-only mode. I will be happy to modify the patch to cover that case if you are interested in accepting this.
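A sketch of the failure mode (table and column names invented): before the patch the inner isolation request is silently ignored; with the patch it raises instead.

(require '[clojure.java.jdbc :as jdbc])

(jdbc/with-db-transaction [tx db-spec {:isolation :repeatable-read}]
  ;; Nested transaction requests a stricter level. Pre-patch, this
  ;; appears to succeed but the body still runs under :repeatable-read.
  ;; Post-patch, it throws rather than silently downgrading.
  (jdbc/with-db-transaction [tx2 tx {:isolation :serializable}]
    (jdbc/query tx2 ["SELECT balance FROM accounts WHERE id = ?" 42])))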



 Comments   
Comment by Sean Corfield [ 22/Jul/15 2:18 PM ]

Thanks Donald. I'll take a look at this.

You don't appear to be listed here: http://clojure.org/contributing – have you signed and submitted the Contributor's Agreement?

Comment by Donald Ball [ 23/Jul/15 1:34 PM ]

Not yet, but I will do so forthwith.

Comment by Donald Ball [ 23/Jul/15 1:37 PM ]

Okay, I'm legal.

Comment by Sean Corfield [ 23/Jul/15 1:38 PM ]

Great! I'll try to take a look at this over the weekend.

Comment by Sean Corfield [ 27/Jul/15 12:46 AM ]

Thanks for the patch Donald!

Comment by Donald Ball [ 27/Jul/15 8:38 AM ]

Would you be amenable to also raising if the inner transaction requests an incompatible read-only mode?

Comment by Sean Corfield [ 27/Jul/15 12:03 PM ]

Sure. Do you want to create a new ticket and attach a patch to it?

Comment by Sean Corfield [ 30/Aug/16 11:15 AM ]

Donald Ball This issue came up again recently (as a request to allow the old behavior), and I wondered if you ever created the issue for the incompatible read-only mode check? I don't see it...

Comment by Donald Ball [ 30/Aug/16 1:58 PM ]

I never did get around to it, no sorry! Would you like it, or are you reconsidering the change in general? (I still feel the new behavior is more correct, but am curious if there's a use case I failed to consider.)

Comment by Sean Corfield [ 01/Sep/16 10:45 PM ]

I'd be happy for you to create an issue for that (and offer a patch if you get the time).

To partially address the request from the (java.jdbc) mailing list, I added a get-isolation-level function so folks can at least tell if they're inside a transaction and what that current isolation level is (if known).





[JDBC-137] make 'result-set-seq' accept customized result-set-read-column to support multi-database environment Created: 07/Aug/16  Updated: 31/Aug/16  Resolved: 31/Aug/16

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Zhou Xiangtao Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

Using java.jdbc with Postgres composite types, the common approach is to extend the IResultSetReadColumn protocol. When there are multiple databases in use, however, each database should be able to specify its own column reader.
Adding an option to result-set-seq that accepts a custom function in place of IResultSetReadColumn may be a solution for this situation.



 Comments   
Comment by Sean Corfield [ 12/Aug/16 10:03 PM ]

The closest parallel with setting parameters would be to have a :read-columns option (to result-set-seq and upstream in several calling functions, as well as a per-database default in the db-spec itself).

Like :set-parameters, this :read-columns function would be expected to map over the metadata itself – it would be passed the result set object and the result set metadata object, and the default implementation would map over (range 1 (inc (.getColumnCount rsmeta))) and call (.getObject rs idx) and then convert that to Clojure based on the corresponding column in rsmeta.

That feels like a lot of "heavy lifting" but it's what folks have to do if they need per-database set-parameters behavior and it feels like the right approach (given a result set and its metadata, construct an entire row).

Would that solve your problem sufficiently?

Comment by Zhou Xiangtao [ 28/Aug/16 7:13 AM ]

@Sean Corfield

Yes, it would solve the problem. Would using the result metadata make the query execute twice – once to analyze the metadata and once to execute the query for the data? If so, does the metadata query take palpable time or not? Or is the result set used rather than a separate result set metadata query?

Comment by Sean Corfield [ 31/Aug/16 7:31 PM ]

The result-set-seq function already has both the metadata and the column data on hand – see this section of the code:

https://github.com/clojure/java.jdbc/blob/master/src/main/clojure/clojure/java/jdbc.clj#L405-L415

Currently the protocol supports a function that is passed the Object (column value), the metadata, and the 1-based column index (into the metadata). I'm suggesting a function that would be passed rs (the result set itself), rsmeta (the metadata), and idxs (the indices), and would return the row values. In the code above, row-values would change to something like:

row-values (fn [] (read-columns rs rsmeta idxs))

And the default behavior would be:

(defn default-read-columns [rs rsmeta idxs]
  (map (fn [^Integer i] (result-set-read-column (.getObject rs i) rsmeta i)) idxs))

(this is slightly simpler than my comment above suggested a few weeks back).

Comment by Sean Corfield [ 31/Aug/16 8:26 PM ]

Will be in next release. An interim 0.6.2-SNAPSHOT release should be available on Sonatype shortly for testing.





[JDBC-141] Add get-isolation-level to make it easier to work with nested transactions Created: 31/Aug/16  Updated: 31/Aug/16  Resolved: 31/Aug/16

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

Should return the isolation level (as a keyword) for the current connection, if known.
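A hedged usage sketch (do-invariant-sensitive-work! is a hypothetical placeholder; per the description, the function returns the level as a keyword when known):

(require '[clojure.java.jdbc :as jdbc])

;; Guard code that relies on running serializably.
(jdbc/with-db-transaction [tx db-spec {:isolation :serializable}]
  (when-not (= :serializable (jdbc/get-isolation-level tx))
    (throw (ex-info "expected a serializable transaction"
                    {:level (jdbc/get-isolation-level tx)})))
  (do-invariant-sensitive-work! tx))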






[JDBC-138] postgres subprotocol does not auto-detect driver classname Created: 15/Aug/16  Updated: 25/Aug/16  Resolved: 15/Aug/16

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

This is because the subprotocols alias lookup is not performed for that case in get-connection.



 Comments   
Comment by Sean Corfield [ 15/Aug/16 7:49 PM ]

This was due to the subprotocols alias lookup not being done. Fixed. Plus some new aliases added for convenience.

Comment by Sean Corfield [ 25/Aug/16 2:45 PM ]

Release 0.6.2-alpha3.





[JDBC-139] Update docs to prefer dbtype / dbname style of db-spec Created: 15/Aug/16  Updated: 25/Aug/16  Resolved: 15/Aug/16

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Task Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

This is by far the easiest way to write a db-spec but it is poorly documented.

Update the get-connection docstring to prefer this. Update the README. Update all examples in the repo. Update the clojure-doc.org documentation as well.
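The contrast the docs should draw, roughly (connection details invented for the example):

;; Traditional style: every piece spelled out by hand.
(def verbose-spec
  {:classname   "org.postgresql.Driver"
   :subprotocol "postgresql"
   :subname     "//localhost:5432/mydb"
   :user        "me"
   :password    "secret"})

;; Preferred dbtype / dbname style: java.jdbc derives the
;; subprotocol, subname, and driver classname itself.
(def simple-spec
  {:dbtype   "postgresql"
   :dbname   "mydb"
   :host     "localhost"    ; optional
   :user     "me"
   :password "secret"})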



 Comments   
Comment by Sean Corfield [ 15/Aug/16 7:47 PM ]

README updated; get-connection docstring updated. I'll update the java.jdbc Guides pages shortly.

Comment by Sean Corfield [ 15/Aug/16 7:49 PM ]

See also https://github.com/clojuredocs/guides/issues/229

Comment by Sean Corfield [ 25/Aug/16 2:45 PM ]

Release 0.6.2-alpha3.





[JDBC-140] Both :identifiers and :entities should be ident? -> ident? functions Created: 24/Aug/16  Updated: 25/Aug/16  Resolved: 25/Aug/16

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Task Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

Although both were originally intended to be String -> String, the docs are not clear and, up until :qualifier was added, String -> Keyword worked for :identifiers. @Bronsa pointed this out in https://github.com/clojure/java.jdbc/commit/6934d169cc2b4140138ccad8018c858e5c51ba8e#commitcomment-18758345 and the code should be updated so both work. This should be reflected in the new jdbc.spec namespace as well (currently the types of those functions are not specified).



 Comments   
Comment by Nicola Mometto [ 25/Aug/16 12:30 PM ]

Two cases that the `qualifier` change has broken:
1-

(jdbc/query conn ["SELECT 1 AS \"foo/bar\""] {:result-set-fn first})

the key for this map is now a keyword with namespace = `nil` and name = `foo/bar` while previously it used to be a keyword with namespace `foo` and name = `bar`

2-

(jdbc/query conn ["SELECT 1 AS x"] {:identifiers keyword})

this used to return correctly but now fails because `(keyword nil :x)` throws an exception while `(keyword :x)` works as expected.

Comment by Sean Corfield [ 25/Aug/16 2:40 PM ]

Fixed in 0.6.2-alpha3 (coming soon).

Comment by Sean Corfield [ 25/Aug/16 2:45 PM ]

Release 0.6.2-alpha3.





[JDBC-134] postgres insert results should respect `identifiers` option Created: 22/Jun/16  Updated: 04/Aug/16  Resolved: 22/Jun/16

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Ryan Fowler Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None
Environment:

postgres


Attachments: Text File 0001-test-if-insert-results-respect-identifiers.patch    

 Description   

When `(insert-multi!)` is called against a postgres database with row maps, the inserted row is returned, but there is no `identifiers` option to change the field names into the application's normal key format.

I attached a failing test. If you want to provide a little direction, I'd be willing to work on a fix as well.
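A sketch of the desired behavior (the :identifiers function and column name here are invented): the row PostgreSQL returns from the insert would have its column names run through :identifiers, just as query results do.

(require '[clojure.java.jdbc :as jdbc]
         '[clojure.string :as str])

(jdbc/insert! pg-db :fruit
              {:name "Apple" :appearance "rosy" :cost 24}
              {:identifiers #(str/replace (str/lower-case %) \_ \-)})
;; desired: returned row keys converted via the :identifiers function
;; (e.g. a hypothetical column FRUIT_COST would come back as :fruit-cost)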



 Comments   
Comment by Sean Corfield [ 22/Jun/16 11:55 AM ]

Good catch. Yes, Postgres is the only DB that returns rows, rather than just numbers, and I only ran across that recently so there's never been a need for :identifiers on insertion. Does this apply only to insert-multi! or does it affect insert! too?

Comment by Ryan Fowler [ 22/Jun/16 2:27 PM ]

It affects both. Both return the result of insert-cols! without modification.

Comment by Sean Corfield [ 22/Jun/16 6:24 PM ]

Thanks for confirmation!

Comment by Sean Corfield [ 22/Jun/16 10:09 PM ]

Will be in 0.6.2. Also made sure the tests exercise :qualifier with inserts.

Comment by Ryan Fowler [ 23/Jun/16 9:16 AM ]

Awesome, Thanks!

Comment by Sean Corfield [ 05/Jul/16 7:16 PM ]

Releasing 0.6.2-alpha1 with this change.

Comment by Andrea Richiardi [ 04/Aug/16 12:01 PM ]

I confirm this works in 0.6.2-alpha2 and it's faster than applying `:row-fn`.





[JDBC-133] Provide optional :qualifier option to better support namespaced keywords as column identifiers Created: 14/Jun/16  Updated: 05/Jul/16  Resolved: 14/Jun/16

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

Currently, you can pass an :identifiers function in options to a number of functions and java.jdbc calls (keyword (identifiers col-name)) to convert SQL column names into Clojure map keys. The default for :identifiers is clojure.string/lower-case.

With the increased focus on namespaced keywords in Clojure 1.9.0 (with the introduction of clojure.spec), it would be nice to have an easy way to tell these functions to produce qualified keywords. You can do that right now by specifying something like (comp #(str "foo/" %) str/lower-case) but that's ugly!

Suggestion: add optional :qualifier option that is used as the first argument in the keyword call, which defaults to nil.
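Side by side, the current workaround and the suggested option (table and qualifier names invented):

(require '[clojure.java.jdbc :as jdbc]
         '[clojure.string :as str])

;; Today: force qualified keywords through :identifiers.
(jdbc/query db-spec ["SELECT id, name FROM fruit"]
            {:identifiers (comp #(str "fruit/" %) str/lower-case)})

;; Suggested: a dedicated :qualifier option.
(jdbc/query db-spec ["SELECT id, name FROM fruit"]
            {:qualifier "fruit"})
;; either way the rows come back keyed as :fruit/id and :fruit/name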



 Comments   
Comment by Sean Corfield [ 14/Jun/16 10:08 PM ]

Will be in 0.6.2

Comment by Sean Corfield [ 05/Jul/16 7:16 PM ]

Releasing 0.6.2-alpha1 with this change.





[JDBC-135] Add some sort of explain option Created: 28/Jun/16  Updated: 05/Jul/16  Resolved: 03/Jul/16

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

EXPLAIN is very useful for debugging SQL performance issues but it isn't very portable (EXPLAIN PLAN in Oracle, EXPLAIN EXTENDED in MySQL... SQL Server seems to only support command options before/after SQL statements?).

Is there some bare minimum we can provide, such that if an :explain? option is provided, we can persuade the underlying system to provide the execution plan (as well as potentially executing the query in question)?

Current thinking:

  • :explain? - truthy enables explain processing, default is just "EXPLAIN " prepended to the SQL statement, but if :explain? is a string, that value would be prepended instead,
  • :explain-fn - post-processing for the explain data returned by the query (if any), the default would just be println.
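A hedged sketch of the proposed options in use (log/info stands in for any logging function the user supplies):

(require '[clojure.java.jdbc :as jdbc])

;; Default: prepend "EXPLAIN " and println the plan, then run the query.
(jdbc/query db-spec ["SELECT * FROM fruit WHERE cost > ?" 20]
            {:explain? true})

;; Custom prefix (e.g. for HSQLDB) and custom handling of the plan.
(jdbc/query db-spec ["SELECT * FROM fruit WHERE cost > ?" 20]
            {:explain?   "EXPLAIN PLAN FOR "
             :explain-fn log/info})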


 Comments   
Comment by Sean Corfield [ 03/Jul/16 6:01 PM ]

The above is basic and works, insofar as it prints the execution plan to the console without interrupting the flow of code. :explain-fn could be a logging function of some sort.

HSQLDB requires EXPLAIN PLAN FOR so those users would need to provide the string explicitly. The default string works for Postgres and MySQL (as well as H2 and SQLite, although the latter produces something that looks like assembler!). Apache Derby seems to ignore an EXPLAIN statement. MS SQL Server doesn't seem to have a SQL-level option for this (really? I tried EXPLAIN and DESCRIBE but neither worked).

I can imagine situations where being able to get the plan back as data might be useful for some databases but for now the printing / logging seems sufficient.

Comment by Sean Corfield [ 03/Jul/16 6:11 PM ]

Will be in 0.6.2

Comment by Sean Corfield [ 05/Jul/16 7:16 PM ]

Releasing 0.6.2-alpha1 with this change.





[JDBC-136] Provide global options in the db-spec map Created: 28/Jun/16  Updated: 05/Jul/16  Resolved: 02/Jul/16

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

Now that options processing is much more streamlined, it's worth taking another look at whether it is reasonable to support global default options in the db-spec map itself (assuming it is a map). This would be particularly valuable for :identifiers, :qualifier, and :entities.
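A usage sketch of what that could look like (spec values invented; the override semantics shown are an assumption):

(require '[clojure.java.jdbc :as jdbc]
         '[clojure.string :as str])

;; Per-database defaults live in the db-spec itself...
(def db-spec
  {:dbtype      "postgresql"
   :dbname      "mydb"
   :identifiers str/lower-case
   :qualifier   "fruit"})

;; ...so individual calls need no options map,
(jdbc/query db-spec ["SELECT id FROM fruit"])

;; but a per-call option would still override the spec's default.
(jdbc/query db-spec ["SELECT id FROM fruit"] {:qualifier nil})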



 Comments   
Comment by Sean Corfield [ 02/Jul/16 8:55 PM ]

This would be pretty straightforward based on adopting an idiom, in each function where both db and options can be passed, like this:

(defn ...
  [... opts]
  (let [{:keys [useful options]} (merge default-opts (when (map? db) db) opts)] ...))

It will touch a lot of code tho'...

Comment by Sean Corfield [ 02/Jul/16 9:44 PM ]

Will be 0.6.2.

Comment by Sean Corfield [ 05/Jul/16 7:16 PM ]

Releasing 0.6.2-alpha1 with this change.





[JDBC-131] db-transaction* doesn't handle .setAutoCommit throwing exception Created: 31/May/16  Updated: 01/Jun/16  Resolved: 01/Jun/16

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Sam Roberton Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

db-transaction* executes the function it is given within a (try ... (finally ...)), and within finally it calls (.setAutoCommit con auto-commit) as well as (possibly) .setTransactionIsolation and .setReadOnly, in order to restore the connection's original values for those attributes.

Those three methods are documented as being allowed to throw SQLException – including if the connection is closed, which it may now be precisely because of an exception thrown by the function in the (try ... (finally ...)). jTDS, at least, does throw an exception out of (.setAutoCommit ...) when the original exception has caused the connection to be closed. When this happens, the original exception is swallowed in favour of the one thrown by the (.setAutoCommit ...).

The calls within the finally should probably themselves be wrapped in a (try ... (catch ...)), with any exception they throw discarded, allowing the original exception to propagate if the (.setAutoCommit ...) (or other setter invocation) fails.
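In outline, the suggested fix might look like this fragment from db-transaction*'s cleanup (old-isolation and old-read-only are placeholder names; the real locals may differ):

(finally
  ;; Restore connection state, but never let a failing setter mask
  ;; the exception (if any) already propagating from the body.
  (doseq [restore! [#(.setAutoCommit con auto-commit)
                    #(.setTransactionIsolation con old-isolation)
                    #(.setReadOnly con old-read-only)]]
    (try
      (restore!)
      (catch java.sql.SQLException _
        ;; the connection is probably already closed/broken; ignore
        nil))))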



 Comments   
Comment by Sean Corfield [ 31/May/16 1:05 AM ]

Thanks Sam. Do you happen to have repro cases of any of these, which could be added to the test cases?

JDBC seems to be full of unpleasant edge cases

Comment by Sam Roberton [ 31/May/16 2:04 AM ]

I did just manage to reproduce it with the following (using with-db-transaction rather than directly calling db-transaction*, just for convenience):

(clojure.java.jdbc/with-db-transaction [tx {:classname "net.sourceforge.jtds.jdbc.Driver"
                                            :subprotocol "jtds:sqlserver"
                                            :subname "//<DB-HOST>/<DB-NAME>;socketTimeout=10"
                                            :user "<DB-USER>"
                                            :password "<DB-PASSWORD>"}]
  (clojure.java.jdbc/query tx ["waitfor delay '00:00:15'"]))

The exception I get in CIDER is:

Unhandled java.sql.SQLException
   Invalid state, the Connection object is closed.

      ConnectionJDBC2.java: 1713  net.sourceforge.jtds.jdbc.ConnectionJDBC2/checkOpen
      ConnectionJDBC2.java: 2223  net.sourceforge.jtds.jdbc.ConnectionJDBC2/setAutoCommit
                  jdbc.clj:  612  clojure.java.jdbc/db-transaction*
                  jdbc.clj:  618  clojure.java.jdbc/db-transaction*
                  jdbc.clj:  587  clojure.java.jdbc/db-transaction*
                      REPL:  118  user/eval76047
             Compiler.java: 6782  clojure.lang.Compiler/eval
             Compiler.java: 6745  clojure.lang.Compiler/eval
                  core.clj: 3081  clojure.core/eval
                  main.clj:  240  clojure.main/repl/read-eval-print/fn
                  main.clj:  240  clojure.main/repl/read-eval-print
                  main.clj:  258  clojure.main/repl/fn
                  main.clj:  258  clojure.main/repl
               RestFn.java: 1523  clojure.lang.RestFn/invoke
    interruptible_eval.clj:   87  clojure.tools.nrepl.middleware.interruptible-eval/evaluate/fn
                  AFn.java:  152  clojure.lang.AFn/applyToHelper
                  AFn.java:  144  clojure.lang.AFn/applyTo
                  core.clj:  630  clojure.core/apply
                  core.clj: 1868  clojure.core/with-bindings*
               RestFn.java:  425  clojure.lang.RestFn/invoke
    interruptible_eval.clj:   85  clojure.tools.nrepl.middleware.interruptible-eval/evaluate
    interruptible_eval.clj:  222  clojure.tools.nrepl.middleware.interruptible-eval/interruptible-eval/fn/fn
    interruptible_eval.clj:  190  clojure.tools.nrepl.middleware.interruptible-eval/run-next/fn
                  AFn.java:   22  clojure.lang.AFn/run
   ThreadPoolExecutor.java: 1142  java.util.concurrent.ThreadPoolExecutor/runWorker
   ThreadPoolExecutor.java:  617  java.util.concurrent.ThreadPoolExecutor$Worker/run
               Thread.java:  745  java.lang.Thread/run

If you call the exact same code, but with with-db-connection instead of with-db-transaction, then the exception you get is a java.net.SocketTimeoutException, which is the behaviour I would expect out of with-db-transaction as well.

I'm not at all familiar with the test cases for clojure.java.jdbc, so I'm not sure whether the above goes any way to helping get automated coverage for this.

And yes – correct/safe error handling in JDBC is always a real nightmare. I suppose at least in Clojure we have a hope of abstracting that away reasonably, rather than the horrible copy/paste mess that it's so easy for equivalent Java projects to end up wallowing in!

Comment by Sean Corfield [ 31/May/16 12:34 PM ]

That's awesome Sam – thank you!

Comment by Sean Corfield [ 01/Jun/16 12:51 PM ]

Ignored exceptions from attempts to restore connection state (since those indicate the connection has gone bad anyway).

Comment by Sean Corfield [ 01/Jun/16 12:52 PM ]

This will be in either 0.6.2 or 0.7.0 depending on the outcome of discussions around using clojure.spec in contrib libraries.





[JDBC-6] Add hook to allow operations on PreparedStatement prior to query execution Created: 26/May/11  Updated: 01/Jun/16  Resolved: 10/Aug/11

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

It would be very useful to be able to pass in a function argument to several c.j.j functions that would be called on the PreparedStatement before executing the query, for example, calling setFetchSize(). I'm not sure what the syntax should look like for this...



 Comments   
Comment by Sean Corfield [ 17/Jul/11 3:36 AM ]

I've started the work to reorganize the code to better expose the statement object for this sort of manipulation.

Comment by Sean Corfield [ 17/Jul/11 5:57 PM ]

Two options under consideration:

  • allow raw PreparedStatement to be passed in by user where appropriate so they have complete control
  • allow map of settings that would be used to create the PreparedStatement and set its options

I like the latter better but Alan Malloy made the point that the former may still be needed for total control anyway.

I'm leaning toward adding a function to ease creation of PreparedStatement objects to at least provide a clean API and then I'll consider how best to integrate that with the other functions.

Comment by Sean Corfield [ 17/Jul/11 10:02 PM ]

Added prepare-statement* to jdbc.internal which takes a connection, a SQL string and a map of options; returns a PreparedStatement. Will expose this in top-level.

with-query-results now allows the first element in the sql-params vector to be either:

  • PreparedStatement object created by the user
  • map of options passed to prepare-statement* in order to create the PreparedStatement

Options supported are in the docstring. May add additional post-creation options (beyond fetch-size and max-rows).

Comment by Sean Corfield [ 10/Aug/11 2:23 AM ]

Users have verified that being able to specify :fetch-size in with-query-results handles the common use cases for performance.





[JDBC-50] insert-rows (incorrectly) patches incomplete records Created: 21/Mar/13  Updated: 01/Jun/16  Resolved: 06/Apr/13

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Cees van Kemenade Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: bug, docs
Environment:

Postgres database on Debian machine


Attachments: File jdbcIssue.clj    

 Description   

When using sql/insert-rows with a dataset that contains incomplete rows, the incomplete rows are patched with values from the preceding rows. The expected behaviour is not documented.

I would expect insert-rows either to accept only complete rows or, if it accepts incomplete rows, to pad them with nil values rather than values from other rows.

Attached is an example, with the output from running it included in comments.



 Comments   
Comment by Sean Corfield [ 06/Apr/13 3:10 PM ]

This looks like an old / long-standing bug that no one has run into before.

In the new (soon-to-be 0.3.0) release, (clojure.java.jdbc.sql/insert :table [1 1] [2] []) would throw an exception:

IllegalArgumentException insert called with inconsistent number of columns / values clojure.java.jdbc.sql/insert-multi-row (sql.clj:116)

I will update clojure.java.jdbc/insert-rows to throw a similar exception.

Comment by Sean Corfield [ 06/Apr/13 3:38 PM ]

Fixed in 0.3.0-SNAPSHOT by throwing an exception if the value-groups are not all the same length.

Currently relies on the DB to trap column names not matching value groups (which is known not to work on SQLite but does work on other DBs as far as I know).

insert-rows is deprecated in 0.3.0 - use insert! instead going forward (which does more validation).





[JDBC-124] execute! (and others?) does not force sql-params to be a destructurable sequence Created: 18/Apr/16  Updated: 01/Jun/16  Resolved: 18/Apr/16

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

This showed up in mixed-language code that passed something array-like to Clojure which then failed to destructure with [[sql & params]]. The same interaction works for query because it explicitly converts its sql-params argument to a vector prior to destructuring, when it tests whether it has a sql-stmt?. The other functions should do the same thing, using vector / vec instead of just wrapping one case in [ ].
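The fix can be sketched as normalizing before destructuring (a simplified fragment, not the exact patch):

;; Instead of destructuring sql-params directly (which requires a
;; genuine Clojure sequence), convert it to a vector first so
;; array-like arguments from other JVM languages also work:
(let [[sql & params] (vec sql-params)]
  ...)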



 Comments   
Comment by Sean Corfield [ 18/Apr/16 7:57 PM ]

In release 0.6.0-alpha2





[JDBC-127] Fix PostgreSQL tests Created: 12/May/16  Updated: 01/Jun/16  Resolved: 12/May/16

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Juho Teperi Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None
Environment:

Postgres 9.4.7


Attachments: Text File jdbc-127-1.patch    

 Description   

Currently tests fail on Postgres. There are several reasons for this.

1. Clean-up fixture is broken because it tries multiple drop tables in a transaction. When a first drop table fails, due to missing table, the transaction is aborted. I doubt other DBs even support transactional DDL so I think clean-up could do without a transaction.

2. Multiple tests check the return value of db-do-prepared-return-keys. In Postgres this returns the complete rows instead of just ids. Tests should map results to retrieve just ids when using Postgres.

3. insert-one-col-val and insert-one-col-val-opts presume that Postgres returns the complete rows, so they map the results to retrieve just the ids. For some reason, in this case insert! only returns ids in Postgres – probably something to do with this particular arity of insert!. I presume the tests are correct and the implementation is broken.

I have some code ready to fix 1 and 2. If my assessments of the problems and solutions sound good I can finalize those and create a patch.

P.S. I think it would be important to eventually run Postgres tests on CI.



 Comments   
Comment by Juho Teperi [ 12/May/16 5:45 PM ]

This patch fixes problem 2.

I didn't include a fix for 1 yet, as I'm not sure what the best solution for it is.

Comment by Sean Corfield [ 12/May/16 6:53 PM ]

I'm adding a select-key function that maps results for all DBs – it's identity for most DBs and the :id function for PostgreSQL – and then calling it on all inserted results (or mapping it over the results). This removes the conditional (if (postgres? db) ...) scattered all over the place – which was inserted as a patch on the original set of tests a long time ago. Now I have a consistent structure to use in future tests.
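Roughly, that test helper could look like this (postgres? is the predicate already used in the test suite; the sketch is illustrative, not the committed code):

(defn select-key
  "Normalize insert results across databases so tests compare plain ids."
  [db]
  (if (postgres? db)
    :id        ; PostgreSQL returns whole rows; extract the generated id
    identity)) ; other databases already return just the generated keys

;; e.g. (map (select-key db) (jdbc/insert-multi! db :fruit rows))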

For 1. I'm just removing the transaction wrapper: DDL just shouldn't be transactional.

I'll take a look at 3. in more detail once I have the tests passing with select-key applied.

Comment by Sean Corfield [ 12/May/16 7:48 PM ]

The behavior in 3. is correct, but not very helpful.

Insert row-as-map (both insert! and insert-multi!) will insert each row separately and ask the JDBC driver to return the generated keys (PostgreSQL returns the whole row... very helpful!).

Insert row-as-col-vals (both insert! and insert-multi!) will perform a single insert operation using executeBatch and then call getUpdateCount which returns just a series of 1's – since the SQL batch operation updated one row for each item in the batch.

I've updated the docstrings in both insert! and insert-multi! to make this clearer.

That is done because it didn't seem possible to portably (across all supported databases) insert multiple rows with a single statement, containing all the values. I can revisit this if someone believes it is possible.

Comment by Sean Corfield [ 12/May/16 7:49 PM ]

Fixed in Git (aside from the docstring changes, this is purely a test code update).





[JDBC-67] Should the default result-set-fn be vec instead of doall? Created: 01/Aug/13  Updated: 01/Jun/16  Resolved: 01/Aug/13

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Gary Fredericks Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: None


 Description   

Rationale: obviously we want eagerness already, so the only difference here is what data structure we use. A vector has the advantage of being chunked when processed as a seq, which is probably the normal usage?

Happy to patch if there are no objections.



 Comments   
Comment by Sean Corfield [ 01/Aug/13 11:21 AM ]

Most code out there is going to treat the result as a seq so I'd be concerned about potential overhead in changing from essentially `(seq (doall (map ...)))` to `(seq (vec (map ...)))`. Can you persuade me that's not a real issue?

Comment by Gary Fredericks [ 01/Aug/13 11:29 AM ]

It occurred to me that maybe a more efficient version of this would be the ability to (maybe not by default) construct a chunked-seq directly (in the lower-level manner) in the result-set-seq function. Would this address your objections?

Comment by Gary Fredericks [ 01/Aug/13 11:30 AM ]

I think the only reason to avoid that by default is the danger (is it really a problem?) of reading more records than the user needs. But it might be that the benefits of chunked seqs outweigh that risk, and just providing an option to unchunk would be sufficient.

Comment by Sean Corfield [ 01/Aug/13 12:03 PM ]

That sounds like over-complication to me. Is it solving a real problem that exists today with the `doall` approach?

Comment by Sean Corfield [ 01/Aug/13 12:06 PM ]

And it also occurred to me that if someone actually wants a vector result, they can just specify `:result-set-fn vec` directly. So it's easy to get the behavior you want without changing the behavior for all that existing code out there (I know, not much code relies on the new API yet, but I have it in production at World Singles so I don't want it to silently change under me at this point - we'd have to do a lot of testing to ensure it didn't adversely affect performance and/or stability).

Comment by Sean Corfield [ 01/Aug/13 12:06 PM ]

If you want a vec, specify it for :result-set-fn. At this point I don't want to change the default when there is existing production code depending on the old behavior.
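With the keyword-argument API of that era, opting into a vector is a one-keyword change, for example:

```clojure
;; Default: query realizes the rows with doall, yielding a seq.
(jdbc/query db ["SELECT * FROM fruit"])

;; Opt in to a vector result without changing the library's default:
(jdbc/query db ["SELECT * FROM fruit"] :result-set-fn vec)
;; the rows come back in a vector, which is chunked when later
;; processed as a seq
```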

Comment by Gary Fredericks [ 01/Aug/13 12:31 PM ]

I don't want a vector in particular, I want a chunked seq so that subsequent processing is faster. If we go with just the chunked-seq approach (in result-set-seq) I think there's a very small chance of any noticeable behavior change.

My hunch is that most uses of java.jdbc would see some speedup as a result of chunking.

Comment by Sean Corfield [ 01/Aug/13 12:42 PM ]

If you can provide a benchmark that proves that - and a separate patch under a separate ticket - I'd look at that more favorably.

Comment by Gary Fredericks [ 01/Aug/13 12:45 PM ]

Sounds good, thanks.





[JDBC-7] Clojure 1.2 compatibility Created: 06/Jun/11  Updated: 01/Jun/16  Resolved: 08/Jun/11

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Blocker
Reporter: Sean Corfield Assignee: Aaron Bedra
Resolution: Completed Votes: 0
Labels: None


 Description   

Per Aaron Bedra: "I just tried to use this on a 1.2 project that we have been working on for a while now and the dynamic defs are killing it. Can I change it to use the older ^{:dynamic true} syntax and release a new version? Is that the only thing that is keeping this off of 1.2? For now we need to make sure the new contrib libraries are 1.2 compatible, so people can start transitioning now."






[JDBC-34] get-connection's URI conventions incompatible with PostgreSQL Created: 12/Jun/12  Updated: 01/Jun/16  Resolved: 18/Jun/12

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Chas Emerick Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

get-connection requires that provided URIs be structured like so:

dbtype://user:password@host:port/database

This is often sufficient, but many PostgreSQL URIs require the use of URI parameters to further configure connections. For example, postgresql.heroku.com provides JDBC URIs like this:

jdbc:postgresql://ec2-22-11-231-117.compute-1.amazonaws.com:5432/d1kuttup5cbafl6?user=pcgoxvmssqabye&password=PFZXtxaLFhIX-nCA0Vi4UbJ6lH&ssl=true

...which, when used outside of Heroku's network, require a further sslfactory=org.postgresql.ssl.NonValidatingFactory parameter.

The PostgreSQL JDBC driver supports a number of different URI parameters, and recommends putting credentials into parameters rather than using the user:password@ convention. Peeking over at the Oracle thin JDBC driver's docs, it appears that it expects credentials using its own idiosyncratic convention, user/password@.

<opinion>
This all leads me to think that get-connection should pass URIs along to DriverManager without modification, and leave URI format conventions up to the drivers involved. For now, my workaround is to do essentially that, using a map like this as input to with-connection et al.:

{:factory #(DriverManager/getConnection (:url %))
 :url "jdbc:postgresql://ec2-22-11-231-117.compute-1.amazonaws.com:5432/d1kuttup5cbafl6?user=pcgoxvmssqabye&password=PFZXtxaLFhIX-nCA0Vi4UbJ6lH&ssl=true"}

That certainly works, but I presume that such a workaround won't occur to many users, despite the docs/source.
</opinion>

I don't think I've used java.jdbc enough (or RDBMSs enough of late) to comfortably provide a patch (or feel particularly confident in the suggestion above). Hopefully the report is helpful in any case.



 Comments   
Comment by Sean Corfield [ 14/Jun/12 1:36 AM ]

How about an option that takes a map like:

{:connection-uri "jdbc:postgresql://ec2-22-11-231-117.compute-1.amazonaws.com:5432/d1kuttup5cbafl6?user=pcgoxvmssqabye&password=PFZXtxaLFhIX-nCA0Vi4UbJ6lH&ssl=true"}

Essentially as a shorthand for the workaround you've come up with?

Comment by Sean Corfield [ 15/Jun/12 10:20 PM ]

Try 0.2.3-SNAPSHOT which has support for :connection-uri and let me know if that is a reasonable solution for you?

Comment by Chas Emerick [ 18/Jun/12 3:35 PM ]

Yup, 0.2.3-SNAPSHOT's :connection-uri works fine. I've since moved on to using a pooled datasource, but this will hopefully be a more obvious path to newcomers than having to learn about :factory and DriverManager.

Comment by Sean Corfield [ 18/Jun/12 3:52 PM ]

Resolved by adding :connection-uri option.

Comment by Carlos Cunha [ 28/Jul/12 8:09 PM ]

Accessing a Heroku database from outside Heroku, "sslfactory=org.postgresql.ssl.NonValidatingFactory" doesn't work. I get "ERROR: syntax error at or near "user" Position: 13" (class org.postgresql.util.PSQLException). This happens whether I add it to the :subname or :connection-uri strings.

another minor issue - why does the documentation of with-connection (0.2.3) refer to the following format for the connection string URI:
"subprotocol://user:password@host:post/subname
An optional prefix of jdbc: is allowed."
when the URI that can actually be parsed successfully looks like the one above: jdbc:postgresql://ec2-22-11-231-117.compute-1.amazonaws.com:5432/d1kuttup5cbafl6?user=pcgoxvmssqabye&password=PFZXtxaLFhIX-nCA0Vi4UbJ6lH&ssl=true
"subprotocol://user:password@host:post/subname" (the format of the DATABASE environment variables on Heroku) will not be parsed correctly. Why is the URI format used on Heroku not supported by the parser?

maybe i'm doing something wrong here

thanks in advance

Comment by Sean Corfield [ 29/Jul/12 4:57 PM ]

Carlos, the :connection-uri passes the string directly to the driver with no parsing. The exception you're seeing is coming from inside the PostgreSQL driver so you'll have to consult the documentation for the driver.

The three "URI" styles accepted by java.jdbc are:

  • :connection-uri - passed directly to the driver with no parsing or other logic in java.jdbc,
  • :uri - a pre-parsed Java URI object,
  • a string literal - any optional "jdbc:" prefix is ignored, then the string is parsed by logic in java.jdbc, based on the pattern shown (subprotocol://user:password@host:port/subname).

If you're using :connection-uri (which is used on its own), you're dealing with the JDBC driver directly.

If you're using :uri or a bare string literal, you're dealing with java.jdbc's parsing (implemented by Phil Hagelberg - of Heroku).
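A side-by-side sketch of the three styles (host, credentials, and database names are placeholders):

```clojure
;; 1. :connection-uri -- handed to the JDBC driver verbatim, no parsing:
{:connection-uri
 "jdbc:postgresql://host:5432/dbname?user=me&password=secret&ssl=true"}

;; 2. :uri -- a pre-parsed java.net.URI object, parsed by java.jdbc:
{:uri (java.net.URI. "postgresql://me:secret@host:5432/dbname")}

;; 3. a bare string -- parsed by java.jdbc itself; any leading "jdbc:"
;; prefix is ignored before parsing:
"postgresql://me:secret@host:5432/dbname"
```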

Hope that clarifies?

Comment by Carlos Cunha [ 29/Jul/12 8:36 PM ]

Sean, thank you for such a comprehensive explanation.

Still, it didn't work with any of the options. I had previously used a Postgres JDBC driver to export to the same database (from an SQL modeller - SQLEditor for the Mac) and it worked (though it would connect sometimes but not others). The connection string used was like "jdbc:postgresql://host:port/database?user=xxx&password=yyy&ssl=true&sslfactory=org.postgresql.ssl.NonValidatingFactory". The driver name was "org.postgresql.Driver" (JDBC4). Anyway, time to give up. I will just use a local database.

Thank you!

Comment by Carlos Cunha [ 31/Jul/12 7:20 PM ]

Sean, the JDBC combinations were working after all. I was overlooking an insert operation into a table with the reserved SQL keyword "user", so I was getting "ERROR: syntax error at or near "user" Position: 13", and therefore the connection was already established at the time.

I'm sorry for all the trouble answering the question.

thank you

Comment by Sean Corfield [ 31/Jul/12 7:46 PM ]

Glad you got to the bottom of it and confirmed that it wasn't a problem in java.jdbc!





[JDBC-44] Add db-do-commands to match new API Created: 26/Dec/12  Updated: 01/Jun/16  Resolved: 26/Dec/12

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

New version of do-commands but with db explicitly passed in and transaction? argument - this will satisfy JDBC-38.
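Usage would look roughly like this (the commands are illustrative; in this era the commands were passed as varargs, while later versions take a vector):

```clojure
;; Run DDL commands with an explicit db and a transaction? flag,
;; rather than relying on a dynamically bound connection:
(jdbc/db-do-commands db true
  "CREATE TABLE fruit (id INT, name VARCHAR(32))"
  "CREATE INDEX fruit_name_idx ON fruit (name)")
```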



 Comments   
Comment by Sean Corfield [ 26/Dec/12 4:10 PM ]

db-do-commands added





[JDBC-46] A mechanism for reading specialized types from result sets could be useful Created: 23/Jan/13  Updated: 01/Jun/16  Resolved: 19/Apr/13

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Gary Fredericks Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None

Attachments: Text File JDBC-46-p1.patch     Text File JDBC-46-p2.patch     Text File JDBC-46-p3.patch    

 Description   

I had this idea when trying to adapt Korma to use with postgres tables with columns of array types. I had some success using Korma's transform functions to convert the java.sql.Array objects into a seq on the way out, but recent changes to Korma seem to indicate that the transform functions are not guaranteed to run before the connection is closed. This interferes with efforts to read the arrays, which I think can be their own result sets.

Presumably korma could be changed to support reading the arrays on the way out, but maybe being able to customize the read behavior within c.j.jdbc would be simpler.

The first idea I've had toward this end is to create a protocol with a single function that objects being read out of the resultset get passed through. It would default to the identity function, but users could e.g. extend it to java.sql.Array. I don't know if there are significant performance implications for an extra function call for each value coming out of the database.

I'll have a patch in this vein attached shortly.



 Comments   
Comment by Gary Fredericks [ 23/Jan/13 10:26 AM ]

Attached patch. Is there any straightforward way to test this? Or a place it should be documented?

Comment by Sean Corfield [ 23/Jan/13 10:48 AM ]

Docs: it just needs a good docstring, since the clojure.github.com site is auto-generated from those (although generation is still triggered manually, it seems).

Test: mvn test

I have Derby, HSQLDB, MySQL, and MS SQL Server (two drivers). I don't have Postgres to test against. By default, mvn test only tries Derby, HSQLDB and SQLite. TEST_DBS=... mvn test will test more.

Comment by Sean Corfield [ 23/Jan/13 10:49 AM ]

Would be nice to have a test with a custom conversion, mapping any String to some constant perhaps? Perhaps a separate test ns that is required and tested from test_jdbc.clj?

Comment by Gary Fredericks [ 23/Jan/13 2:38 PM ]

Oh, we have to be careful with testing because I believe any extensions to the protocol are going to be effective for all the remaining tests, right?

Comment by Gary Fredericks [ 23/Jan/13 3:53 PM ]

Attached new patch with docstrings and a test. I've only run the tests with MySQL. Let me know if there's anything else I can do.

Comment by Gary Fredericks [ 29/Jan/13 4:37 PM ]

This patch recently came in handy in a different situation: when JDBC was returning Integers instead of Longs, we had a really clean spot to do the conversion.

Comment by Sean Corfield [ 07/Apr/13 3:44 AM ]

There have been a lot of code changes since you submitted the patch and right now the patch doesn't apply. Can you re-submit the patch against the latest master branch so I can try again? Thanx!

Comment by Gary Fredericks [ 11/Apr/13 9:32 PM ]

I'm getting 3 test failures on master currently (with a basic mvn test), and after applying this new patch (p3) I still get 3 failures. So hopefully that means I didn't break anything.

Comment by Sean Corfield [ 19/Apr/13 9:34 PM ]

Applied. Will be in 0.3.0-alpha2

Comment by Gary Fredericks [ 15/May/13 9:52 AM ]

I'm thinking this feature is no longer necessary due to the :row-fn option – since the only motivation in the first place was to have the opportunity to transform a result before the connection closes.

There would still be no easy way to do it from Korma, but that's Korma's fault. So maybe we should remove this before 0.3.0 is released?

Comment by Sean Corfield [ 21/May/13 7:19 PM ]

This turns out to be a very useful hook for certain database types. I used it to solve an issue with Oracle, to enable users to automatically convert Oracle-specific datatypes to regular values, in JDBC-57. So, thank you... even if you think it isn't necessary now!

Comment by Gary Fredericks [ 21/May/13 7:53 PM ]

That use case could be served by :row-fn, right? You just like that it's easier? I don't mind it if others still find it valuable, I just wanted to make sure we didn't accidentally cement redundant features.

Comment by Sean Corfield [ 21/May/13 9:40 PM ]

:row-fn is intended for whole-row transformations - it is lazily mapped over the result set - so it's really meant for post-processing your data in a general sense while the connection is open. The protocol-based column reader is intended to transform specific types of data, independent of any application semantics (e.g., Oracle returns an oracle.sql.TIMESTAMP which can be automatically transformed into a java.sql.Timestamp or Date or whatever).

And :result-set-fn is intended for whole-result-set transformation, just prior to returning the result set.
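That division of labor can be sketched like this (the protocol shipped as IResultSetReadColumn; the Oracle class comes from the comment above, and the query calls are shown with the later options-map style):

```clojure
;; Column-level: extend the read protocol for a specific JDBC type,
;; independent of any one query. Runs while the connection is open.
(extend-protocol jdbc/IResultSetReadColumn
  oracle.sql.TIMESTAMP
  (result-set-read-column [v _rsmeta _idx]
    (.timestampValue v)))   ; convert to a java.sql.Timestamp

;; Row-level: :row-fn is lazily mapped over each row.
(jdbc/query db ["SELECT * FROM events"] {:row-fn :created_at})

;; Result-set-level: :result-set-fn transforms the whole result set
;; just before it is returned.
(jdbc/query db ["SELECT * FROM events"] {:result-set-fn count})
```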

Comment by Gary Fredericks [ 26/May/13 4:27 PM ]

Sounds good to me.





[JDBC-52] Allow execute! to do batch updates via :multi? optional argument Created: 19/Apr/13  Updated: 01/Jun/16  Resolved: 19/Apr/13

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

See https://groups.google.com/d/msg/clojure-java-jdbc/H3spAaWgkZI/9WJPXJsHiVsJ for background and justification.

execute! would take an optional :multi? argument, default false, and execute-helper would either call db-do-prepared directly (as it does now) or apply it, assuming (rest sql-params) is a sequence of vectors, which would become multiple params-groups.

I think this would also allow insert! to be simplified and implemented in terms of execute! which would be a nice symmetry.
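A sketch of the proposed usage (shown with the later options-map calling convention; the table is illustrative):

```clojure
;; Single statement, one params group (the existing behavior):
(jdbc/execute! db ["UPDATE fruit SET cost = ? WHERE name = ?" 49 "Apple"])

;; Batch: with :multi? true, (rest sql-params) is a sequence of
;; params groups, executed as one batched statement:
(jdbc/execute! db
  ["UPDATE fruit SET cost = ? WHERE name = ?" [49 "Apple"] [17 "Pear"]]
  {:multi? true})
;; returns update counts, one per params group, e.g. (1 1)
```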



 Comments   
Comment by Sean Corfield [ 19/Apr/13 9:18 PM ]

Implemented. Will be in alpha2.





[JDBC-59] Support Postgres in the test suite Created: 25/May/13  Updated: 01/Jun/16  Resolved: 26/May/13

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Dan Dillinger Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: patch, test

Attachments: File postgres-test-support.diff    

 Description   

The README for c.j.j currently indicates postgres support with instructions on how to run the tests against it; however, the tests fail if you do. There are two reasons:

  • float-or-double needs a postgres case or it fails because of equality classes
  • the postgresql jdbc driver's .getGeneratedKeys() implementation returns all columns in the row, not just the auto-generated keys.

A patch is attached: postgres-test-support.diff (05/25/2013)



 Comments   
Comment by Sean Corfield [ 26/May/13 10:19 AM ]

Applied patch. Thanx!





[JDBC-77] Add ISQLValue protocol for conversion of Clojure values to SQL values Created: 02/Dec/13  Updated: 01/Jun/16  Resolved: 02/Dec/13

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

This should help people with Postgres or other exotic custom types in SQL.

Will need to add mapping of the protocol's function across sets of values.
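Extending the protocol might look like this (MyPoint and ->pg-point are hypothetical; the protocol's sql-value defaults to identity for Object and nil):

```clojure
;; Extend ISQLValue so a custom Clojure type is converted on the way
;; into the database. MyPoint and ->pg-point are made-up names for a
;; domain type and a function producing a driver-understood object
;; (e.g. a PostgreSQL PGobject).
(extend-protocol jdbc/ISQLValue
  MyPoint
  (sql-value [p]
    (->pg-point p)))
```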



 Comments   
Comment by Sean Corfield [ 02/Dec/13 1:06 AM ]

ISQLValue protocol added with default Object/nil implementations.

Comment by Wei Hsu [ 07/Dec/13 3:15 PM ]

Turns out that the :set-parameters integration point was enough for my needs at the moment. For now, I'm going to prioritize shipping my project over working on this protocol.

Comment by Sean Corfield [ 07/Dec/13 4:10 PM ]

Cool. Glad that integration point works for you! It's not as elegant but it provides more flexibility because of the arguments it provides (the whole set).





[JDBC-81] Deprecate db-transaction and add with-db-transaction for better consistency Created: 12/Dec/13  Updated: 01/Jun/16  Resolved: 12/Dec/13

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Comments   
Comment by Sean Corfield [ 12/Dec/13 8:11 PM ]

Added with-db-transaction. Deprecated db-transaction (it prints a deprecation warning). Updated code and tests to use with-db-transaction instead.





[JDBC-87] Laziness on metadata-result results in closed resultset under Microsoft SQL Created: 06/Jan/14  Updated: 01/Jun/16  Resolved: 30/Jan/14

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Niels van Klaveren Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

Getting a collection of tables and then using them to get the columns inside those tables works swimmingly under Oracle, but throws an exception under Microsoft SQL Server (driver 4.0.2206.100):

CompilerException com.microsoft.sqlserver.jdbc.SQLServerException: The result set is closed., compiling:(form-init602030969958815921.clj:2:3)

(def tables
  (jdbc/with-db-metadata [m db]
    (->> (jdbc/metadata-result (.getTables m nil nil nil (into-array String ["TABLE"])))
         (map :table_name)
         doall)))

(def columns
  (jdbc/with-db-metadata [m db]
    (->> tables
         (mapcat #(jdbc/metadata-result (.getColumns m nil nil % nil)))
         doall)))

The problem lies in the mapcat part. Using only map works fine, but there's no way to concat before the doall, since that results in the same exception.
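Given that behavior, one workaround is to force each inner result set while the metadata connection is still open, for example:

```clojure
;; Force each inner result set eagerly, inside with-db-metadata's
;; scope, so no lazy seq escapes the connection before being realized.
(def columns
  (jdbc/with-db-metadata [m db]
    (->> tables
         (mapcat #(doall (jdbc/metadata-result
                           (.getColumns m nil nil % nil))))
         doall)))
```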

Changing metadata-result from

(if (instance? java.sql.ResultSet rs-or-value)
    (result-set-seq rs-or-value :identifiers identifiers :as-arrays? as-arrays?)
    rs-or-value)

to

(if (instance? java.sql.ResultSet rs-or-value)
    (doall (result-set-seq rs-or-value :identifiers identifiers :as-arrays? as-arrays?))
   rs-or-value)

fixes things, but I'm not entirely sure of the consequences that would entail in case the laziness is intended.



 Comments   
Comment by Niels van Klaveren [ 06/Jan/14 7:23 AM ]

Giving metadata-result a result-set-fn parameter defaulting to either identity or doall would work, I guess.

(defn metadata-result
  "If the argument is a java.sql.ResultSet, turn it into a result-set-seq,
   else return it as-is. This makes working with metadata easier.
   Also accepts :identifiers and :as-arrays? to control how the ResultSet
   is transformed and returned. See query for more details."
  [rs-or-value & {:keys [identifiers as-arrays? result-set-fn]
                  :or {identifiers str/lower-case result-set-fn doall}}]
  (if (instance? java.sql.ResultSet rs-or-value)
    (result-set-fn (result-set-seq rs-or-value :identifiers identifiers :as-arrays? as-arrays?))
    rs-or-value))

Comment by Sean Corfield [ 06/Jan/14 11:15 AM ]

Yes, I think that is probably the right solution. I'll review it in the context of query etc and make sure it would create a consistent API. It might be worthwhile having :row-fn as well but I need to compare the functions and think carefully about the impact of those changes.

Thanx for spotting the problem!

Comment by Sean Corfield [ 30/Jan/14 1:34 AM ]

Add :row-fn and :result-set-fn to metadata-result





[JDBC-90] Add two subprotocol - classname mappings for oracle support Created: 17/Feb/14  Updated: 01/Jun/16  Resolved: 28/Feb/14

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Niels van Klaveren Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

Currently, Oracle subprotocols and classnames are missing. The Oracle jdbc driver supports two subprotocols, "oracle:thin" and "oracle:oci" (for thin client and oracle client support). Both need to be mapped to the class "oracle.jdbc.OracleDriver".



 Comments   
Comment by Sean Corfield [ 28/Feb/14 1:39 AM ]

Will be in 0.3.4





[JDBC-92] Provide more intuitive db-spec format Created: 07/Mar/14  Updated: 01/Jun/16  Resolved: 28/Apr/14

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

All this :subname / :subprotocol stuff is fine but it would be nice to just be able to say:

{:dbtype "mysql" :dbname "foo"}

and have the :subname be constructed from defaults, optionally allowing :host / :port to override 127.0.0.1 and (for mysql, 3306; for mssql/jtds 1433).
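So these forms would describe the same MySQL database (credentials are placeholders):

```clojure
;; Minimal spec: host defaults to 127.0.0.1, port to the type's default.
{:dbtype "mysql" :dbname "foo" :user "me" :password "secret"}

;; Overriding host and port explicitly:
{:dbtype "mysql" :dbname "foo" :host "db.example.com" :port 3307
 :user "me" :password "secret"}

;; Equivalent to the older, less intuitive form:
{:subprotocol "mysql" :subname "//127.0.0.1:3306/foo"
 :user "me" :password "secret"}
```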



 Comments   
Comment by Glen Mailer [ 27/Apr/14 4:29 AM ]

It would also be helpful to support use of a URI for config, as used in systems like heroku.

This link seems to be the best example I can find:

http://turbogears.org/1.0/docs/DbUri.html

Comment by Sean Corfield [ 28/Apr/14 9:16 PM ]

Glen, URIs are already supported. You can see examples in the test suite: https://github.com/clojure/java.jdbc/blob/master/src/test/clojure/clojure/java/test_jdbc.clj#L90-L98

Comment by Sean Corfield [ 28/Apr/14 9:18 PM ]

See https://github.com/clojure/java.jdbc/commit/4667bdd86e73870cbb1e369c65fe8262fa964637

Will be in 0.3.4 release.





[JDBC-125] Allow more functions to accept options for PreparedStatement Created: 07/May/16  Updated: 01/Jun/16  Resolved: 11/May/16

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

Now that functions accept a single options map, that could be passed between calls easily enough, which would allow options for prepare-statement to be passed to query and other functions.



 Comments   
Comment by Sean Corfield [ 07/May/16 8:26 PM ]

Note that query currently does have a way to pass options to the prepare-statement call but it's not obvious (and I'd forgotten until I looked at the code): the [sql & params] vector can actually be [opts sql & params] and that opts is passed to prepare-statement. That is really no longer necessary since we could pass query's opts through db-query-with-resultset - a function that does not yet take options!

Comment by Sean Corfield [ 11/May/16 3:19 PM ]

Fixed in 0.6.0 RC 2





[JDBC-128] Insert! without transaction doesn't work on Postgres Created: 12/May/16  Updated: 01/Jun/16  Resolved: 12/May/16

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Juho Teperi Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None
Environment:

Postgres 9.4.7


Attachments: Text File jdbc-128-1.patch    

 Description   

When insert! is run outside of a transaction, it is not committed when running on Postgres.
insert! should, by default, automatically run its commands in a transaction.

This problem is probably not seen in current tests as default test databases automatically commit single commands.

I've yet to reproduce this in unit tests due to JDBC-127, but I can reproduce it in an application.

In 0.5.5 (https://github.com/clojure/java.jdbc/commit/310781429225b614cab01256f3adce54917644aa) extract-transaction? was removed and the default :transaction? value was moved into insert!. Later, in 0.5.6 (https://github.com/clojure/java.jdbc/commit/1928ff4d627243c9d2e0bf309a8b4f965f5034d3) insert! was rewritten. The :transaction? check was moved to insert-cols! and insert-rows! but the default value was lost.

I attached a patch that adds the default value to insert-cols! and insert-rows!. But I think it would be a good idea to wait until the test issues (JDBC-127) have been fixed before merging this, so that this problem and its solution can be validated with tests.



 Comments   
Comment by Sean Corfield [ 12/May/16 7:52 PM ]

Good catch on this!

I have not been able to reproduce this in a test case yet but I can see by inspection that you're right about the loss of transaction by default. I'm surprised I haven't run into this in all my other testing (on MySQL).

Comment by Sean Corfield [ 12/May/16 8:10 PM ]

Fixed in 0.6.1 which should be on Maven Central tonight!





[JDBC-41] How to preserve order of rows? Created: 24/Dec/12  Updated: 01/Jun/16  Resolved: 07/Apr/13

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Konstantin Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

I'd like to get an ordered record set within with-query-results
(or a way to restore the original order).
Is this possible with clojure/java.jdbc?



 Comments   
Comment by Sean Corfield [ 24/Dec/12 12:49 PM ]

I assume you mean the order of columns within the rows, not the actual order of rows within the result set?

I'm looking at providing an alternative result set format using arrays instead of maps which will preserve column orders.

Since maps are unordered by default in Clojure, I'd be suspicious of code that relies on ordering of keys (so I'm reducing this to 'minor').

Comment by Konstantin [ 24/Dec/12 10:54 PM ]

You are right, I mean the order of columns, not the rows.

I understand that maps are unordered. There are a few cases where the order matters; one of them is my task: a simple SQL database client.

Perhaps it would be better to make this behavior optional, since it is not used very often?

Comment by Sean Corfield [ 25/Dec/12 8:26 PM ]

An optional keyword argument sounds like a reasonable compromise. I'm already looking at keyword arguments on resultset-seq for returning arrays instead of maps and returning an order-preserving map should be as simple as replacing (zipmap keys (row-values)) with (apply array-map (interleave keys (row-values)))

Comment by Sean Corfield [ 07/Apr/13 3:38 AM ]

Fixed in 0.3.0-SNAPSHOT by allowing an :as-arrays? argument to the query and result-set-seq functions, which returns the result as a vector of column names followed by vectors for each row.
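For example (shown with the later options-map style; the keyword shipped as :as-arrays?):

```clojure
;; Maps (the default): column order within each row is not guaranteed.
(jdbc/query db ["SELECT id, name FROM fruit"])
;; e.g. ({:id 1 :name "Apple"} ...)

;; Arrays: the first element is the column names, in SELECT order,
;; followed by one vector per row -- so order is fully preserved.
(jdbc/query db ["SELECT id, name FROM fruit"] {:as-arrays? true})
;; e.g. [[:id :name] [1 "Apple"] [2 "Pear"]]
```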





[JDBC-73] Add ddl/create-primary-key and ddl/create-foreign-key Created: 05/Nov/13  Updated: 01/Jun/16  Resolved: 21/Nov/13

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Matt Oquist Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: None

Attachments: Text File java-jdbc-keys.patch    
Patch: Code and Test

 Description   

I've attached a patch to add two new DDL fns to create primary keys and foreign keys. What do you think?



 Comments   
Comment by Sean Corfield [ 21/Nov/13 11:08 PM ]

Given recent discussions about the direction of java.jdbc and the DSLs that crept into 0.3.0, I'm going to close this as "won't fix" and direct efforts toward adding DDL support to HoneySQL (and we talked a bit about this at Clojure/conj).

Comment by Sean Corfield [ 21/Nov/13 11:10 PM ]

Won't fix - DSLs are not growing here...





[JDBC-25] The classnames private var should be public Created: 07/Feb/12  Updated: 01/Jun/16  Resolved: 12/Feb/12

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Trivial
Reporter: Nicolas Buduroi Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: None


 Description   

I've got a bunch of integration/system tests in Lobos which are run against multiple target databases. As I do not want to force all the target drivers to be dev-dependencies, I check whether each driver is available before running those tests. As long as the :classname key is in the db-spec it's all fine, but now that java.jdbc supports db-specs without that key, it would be useful for me to have access to the classnames var, so it would be great to have it made public. This isn't a blocker as up till now I haven't really needed it, but I'm planning to add some tests for string-based db-specs.



 Comments   
Comment by Sean Corfield [ 12/Feb/12 4:21 AM ]

Right now, that's in the internal namespace which is going away so I can't make it part of the API. You can always pull private vars out with @#'clojure.java.jdbc.internal/classnames but bear in mind that internal ns will merge with the parent ns soon!





[JDBC-51] Reflection warning removal Created: 07/Apr/13  Updated: 01/Jun/16  Resolved: 19/Apr/13

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Task Priority: Critical
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

Before releasing 0.3.0, make a pass over the code and remove reflection where reasonable.



 Comments   
Comment by Sean Corfield [ 19/Apr/13 9:52 PM ]

Declared get-connection to return Connection in 0.3.0-alpha2.





[JDBC-9] Solve duplicate column label issue Created: 10/Jun/11  Updated: 01/Jun/16  Resolved: 11/Oct/11

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

Per Peter Siewart (and others):

– While working with clojure.java.jdbc, I kept running into the "ResultSet must have unique column labels" exception, so I rewrote it to automatically rename duplicate columns. Now, I can run "SELECT * FROM tbl1 JOIN tbl2 ON tbl1.id = tbl2.id" without causing any errors.

This would definitely be a nice enhancement, but so far no one with a signed CA has offered a patch.

Figuring out the appropriate renaming strategy is key - a sensible default with an optional function to allow users to specify how to do this? Or perhaps only renaming if users specify such a function?



 Comments   
Comment by Sean Corfield [ 11/Oct/11 4:12 PM ]

Applied a variant of Peter's patch and added unit tests.





[JDBC-11] Transaction not rolled back on Throwable exception Created: 03/Jul/11  Updated: 01/Jun/16  Resolved: 17/Jul/11

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 1
Labels: None

Attachments: Text File 0001-Don-t-commit-transaction-in-case-of-exceptions.patch     Text File test-mechanism.patch    

 Description   

(transaction
  (insert-values .....)
  (throw (Throwable. "baaaaad")))

Reported by Sebastián Galkin on github.

Transactions rollback on Exception but not Throwable.



 Comments   
Comment by Sebastián Bernardo Galkin [ 03/Jul/11 11:07 PM ]

I think the correct behavior would be to call rollback on the transaction, not just skip the commit. The user will expect one of commit or rollback to be called at the end of the transaction block.

I was thinking about how to test this. How do you feel about a mock Connection object, where we can check if commit and/or rollback have been called as part of the test?

The same mechanism could be used to test other features.

Comment by Sebastián Bernardo Galkin [ 04/Jul/11 12:23 AM ]

I was thinking of something like the attached for testing the issue (and other functionality).
It needs several improvements, but it's enough to get the idea.
What do you think?

Comment by Sean Corfield [ 04/Jul/11 12:27 AM ]

FYI, here's the IRC discussion about this, for the record:

[5:11pm] <paraseba>
In clojure.java.jdbc there is a transaction function, which is supposed to rollback if the passed body throws an exception, but it only does that in case of an Exception, not any Throwable. What's the rationale behind this?
[5:11pm] <paraseba>
I'd want my transaction rolledback in case of any errors
[5:12pm] <paraseba>
seancorfield: maybe you can explain me this?
[5:53pm] <amalloy>
paraseba: catching Errors is generally a bad practice. i'm not saying it's wrong all the time, but Errors are often unrecoverable anyway
[5:53pm] <amalloy>
eg, "You ran out of heap space! I can't even allocate memory to throw an exception!"
[5:54pm] <paraseba>
but, even in the worst conditions, shouldn't we try to rollback the transaction? is not better that commiting in this unexpected error condition?
[5:55pm] <paraseba>
we can then rethrow the Throwable, after trying to rollback
[5:55pm] <lucian>
paraseba: i don't think the db will commit if it gives you an error
[5:55pm] <lucian>
and as amalloy, errors like OOM are best solved with exit()
[5:55pm] <paraseba>
lucian: it will ... jdbc is catching Exception and rolling back in that case .... but it commits in a finally
[5:56pm] <paraseba>
so, if you have an Error thrown, it will commit
[5:56pm] <paraseba>
I guess that's more surprising than a rollback
[5:57pm] <paraseba>
the logic is .... (try do-your-thing (catch Exception roll-back) (finally commit))
[5:57pm] <lucian>
paraseba: well, then maybe you don't want to commit in finally?
[5:57pm] <paraseba>
I don't, not if I got an Error
[5:58pm] <amalloy>
lucian: i think he's upset that a library he's using is committing on error
[5:59pm] <paraseba>
amalloy: I can solve it easily by wrapping my operations in a catch Throwable -> rollback -> rethrow, but I think it's not the right behavior for the library
[5:59pm] <paraseba>
I would expect a commit only if the block ended without any kind of exceptions or errors. don't you agree ?
[6:01pm] <amalloy>
paraseba: meh. all kinds of weird stuff can happen if an Error occurs; i wouldn't be entirely certain that an attempt to "recover" makes things worse due to some weird program state caused by the Error. i mean, my completely-unresearched opinion is that catching Throwable would be better, but you can't really rely for your program's correctness on anything that happens after an Error
[6:02pm] <paraseba>
but, we are forcing a commit after an Error
[6:04pm] <paraseba>
the usual logic should be .... (do do-your-thing (commit)) if do-your-thing throws anything ... no commit is done. Putting a commit in a finally enforces the commit, even after Errors
[6:06pm] <amalloy>
yeah, i think i agree
[6:08pm] <paraseba>
amalloy: ok, I'll report an issue, thanks

Comment by Sebastián Bernardo Galkin [ 09/Jul/11 2:36 PM ]

It's important to realize that there are 2 problems here.

a.- Transaction not rolled back on Throwable
b.- The original Exception is "swallowed" and a new, different one is thrown

I think both are equally serious.

Comment by Sean Corfield [ 09/Jul/11 3:57 PM ]

I agree Sebastian. I've raised this issue for input from clojure-dev (looking for stylistic guidelines for contrib libraries) because I think both problems need to be fixed.

My current leaning is toward supporting three paths:

  • success: rollback transaction if user code sets rollback true, else commit
  • Exception: rollback transaction and rethrow (without wrapping)
  • Throwable: allow it to escape, take no action
Comment by Sebastián Bernardo Galkin [ 12/Jul/11 11:05 AM ]

I'm attaching a patch that fixes the problem without adding any tests.

Comment by Sean Corfield [ 17/Jul/11 3:28 AM ]

On success, commit unless rollback has been set (in which case rollback).
On Exception, rollback and rethrow as-is.
On Error (Throwable), allow it to escape.
In all cases, set rollback false and restore auto-commit setting.

The only two changes here should be:

  • Exception is no longer wrapped when thrown (may break code that expected the wrapping?)
  • No commit attempted on Error (Throwable)
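The three paths above can be sketched roughly like this (an illustrative simplification, not the actual java.jdbc implementation; the function and argument names are invented):

```clojure
;; Illustrative sketch only. rollback-requested? is an atom the user
;; code can set to force a rollback on success.
(defn run-in-transaction
  [^java.sql.Connection con f rollback-requested?]
  (let [auto-commit (.getAutoCommit con)]
    (.setAutoCommit con false)
    (try
      (let [result (f)]
        ;; success: honor a user-requested rollback, else commit
        (if @rollback-requested? (.rollback con) (.commit con))
        result)
      (catch Exception e
        ;; Exception: rollback and rethrow without wrapping
        (.rollback con)
        (throw e))
      ;; Error/Throwable: no catch clause, so it escapes untouched
      (finally
        ;; in all cases: reset rollback flag, restore auto-commit
        (reset! rollback-requested? false)
        (.setAutoCommit con auto-commit)))))
```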




[JDBC-32] with-connection does not accept standard jdbc URIs. Created: 18/May/12  Updated: 01/Jun/16  Resolved: 10/Jun/12

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Tyler Jennings Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None
Environment:

clojure 1.4.0
clojure.java.jdbc 0.2.1



 Description   

with-connection and other functions accepting a db-spec do not support standard JDBC URIs, like this one:

jdbc:mysql://localhost:3306/<database>

In my environment, passing that URI yields an exception:

Exception in thread "main" java.lang.NullPointerException
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:247)
at clojure.lang.RT.loadClassForName(RT.java:2050)
at clojure.java.jdbc$get_connection.invoke(jdbc.clj:175)
at clojure.java.jdbc$get_connection.invoke(jdbc.clj:166)
at clojure.java.jdbc$get_connection.invoke(jdbc.clj:168)
at clojure.java.jdbc$with_connection_STAR_.invoke(jdbc.clj:299)
at bungle_o.migration$create_exceptions.invoke(migration.clj:17)
at bungle_o.migration$_main.invoke(migration.clj:21)
at clojure.lang.Var.invoke(Var.java:411)
at user$eval191.invoke(NO_SOURCE_FILE:1)
at clojure.lang.Compiler.eval(Compiler.java:6511)
at clojure.lang.Compiler.eval(Compiler.java:6501)
at clojure.lang.Compiler.eval(Compiler.java:6477)
at clojure.core$eval.invoke(core.clj:2797)
at clojure.main$eval_opt.invoke(main.clj:297)
at clojure.main$initialize.invoke(main.clj:316)
at clojure.main$null_opt.invoke(main.clj:349)
at clojure.main$main.doInvoke(main.clj:427)
at clojure.lang.RestFn.invoke(RestFn.java:421)
at clojure.lang.Var.invoke(Var.java:419)
at clojure.lang.AFn.applyToHelper(AFn.java:163)
at clojure.lang.Var.applyTo(Var.java:532)
at clojure.main.main(main.java:37)

This URI works fine, however:

mysql://localhost:3306/<database>

Is this intentional or a bug?

Here's the mysql documentation for reference: http://dev.mysql.com/doc/refman/5.0/en/connector-j-reference-configuration-properties.html
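For reference, the two spec strings being compared look like this (the database name is illustrative):

```clojure
;; Standard JDBC URI -- threw a NullPointerException before the fix:
(def failing-spec "jdbc:mysql://localhost:3306/mydb")

;; Scheme-only form -- already worked:
(def working-spec "mysql://localhost:3306/mydb")
```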



 Comments   
Comment by Sean Corfield [ 10/Jun/12 5:48 PM ]

It was a bug. Thanks for spotting that.





[JDBC-42] insert! always uses a transaction Created: 26/Dec/12  Updated: 01/Jun/16  Resolved: 06/Apr/13

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

execute!, delete!, and update! all accept an optional :transaction? argument that lets you control whether the operation occurs in a transaction. insert! does not.



 Comments   
Comment by Sean Corfield [ 06/Apr/13 6:53 PM ]

Fixed in 0.3.0-SNAPSHOT - :transaction? argument is now supported on insert!





[JDBC-45] insert! (and insert) need to support insert of full rows with no column names Created: 27/Dec/12  Updated: 01/Jun/16  Resolved: 06/Apr/13

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

In order to fully deprecate insert-values (and deprecate insert-rows at all), the new insert logic needs to support the absence of column names.



 Comments   
Comment by Sean Corfield [ 06/Apr/13 4:38 PM ]

Fixed in 0.3.0-SNAPSHOT and insert-values / insert-rows are fully deprecated.





[JDBC-57] Better support for Oracle Created: 18/May/13  Updated: 01/Jun/16  Resolved: 20/May/13

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Manish Handa Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None
Environment:

Oracle 10g and above


Attachments: Text File datatype.patch     Text File oracle-fix.patch    

 Description   

This is just a suggestion, and I am not sure if it is the right way to do this.

When fetching data from Oracle, the Oracle JDBC driver returns all numeric columns as BigDecimal and date/datetime columns as oracle.sql.TIMESTAMP. This causes a lot of headaches, as we have to remember to convert the data to the correct type.

Although, this can be handled at the application level by using a :row-fn, it would be really nice if this library handles the conversion itself.



 Comments   
Comment by Manish Handa [ 18/May/13 2:10 PM ]

Patch for handling results returned by oracle jdbc thin driver

Comment by Sean Corfield [ 19/May/13 1:44 PM ]

I think the right solution here will be to modify the result-set-read-column protocol function to be passed rsmeta and i, and then Oracle users can extend the protocol to those types and perform the conversions they want. I don't believe automatically converting BigDecimal is going to be the right choice for all users, although doing the conversion for oracle.sql.TIMESTAMP is more arguable.

Comment by Manish Handa [ 20/May/13 9:57 AM ]

Agreed. I thought of this, but it requires changing the protocol to take rsmeta along with the column index. I have uploaded another patch for this change.

Also, it'll be better to leave the handling of oracle.sql.Timestamp to the application as it needs a dependency on the oracle thin driver (which we should certainly not add to java.jdbc).

So, after this updated protocol, the oracle users can extend it with something like this (much better now):

;;; Oracle users can define this in their applications
;;; to extend it to BigDecimal and other Oracle types
(extend-protocol IResultSetReadColumn
  oracle.sql.TIMESTAMP
  (result-set-read-column [x _ _] (.timestampValue x))

  java.math.BigDecimal
  (result-set-read-column [val rsmeta idx]
    (if-not (= (.getScale rsmeta idx) 0)
      (.doubleValue val)            ;; reals
      (if (= (.getPrecision rsmeta idx) 1)
        (= 1 (.longValue val))      ;; booleans
        (.longValue val)))))        ;; integrals

Comment by Sean Corfield [ 20/May/13 10:45 AM ]

Thanx. Yes, that was where I was going with this, and while we're in alpha for 0.3.0 I'm not averse to changing the signatures of new stuff introduced in the API rewrite. I'll review the patch later and probably roll it in this week (I just got back from a web tech conference in MN and I'm busy catching up on work!).

Comment by Sean Corfield [ 20/May/13 10:52 PM ]

Patch applied. Tests updated to match. Will be in 0.3.0-alpha5.





[JDBC-61] Add left and right join to DSL (Patch attached) Created: 16/Jun/13  Updated: 01/Jun/16  Resolved: 15/Sep/13

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sebastian Hennebrueder Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: None

Attachments: Text File 0001-add-left-and-right-join-to-DSL.patch    
Patch: Code and Test

 Comments   
Comment by Sean Corfield [ 15/Sep/13 4:07 PM ]

Rather than expand the deliberately minimal DSL for SQL generation, I'd rather point people to HoneySQL.

The only area I might consider expanding the DSL is for the where clause to support more than just key = value but even there I'd rather direct folks to HoneySQL.





[JDBC-74] Allow :user as alias for :username for consistency Created: 15/Nov/13  Updated: 01/Jun/16  Resolved: 02/Dec/13

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

Only the :datasource format of connection uses :username. The URI format, and any other format that passes parameters through, uses :user instead.

Allow :user to be used with :datasource for consistency. Open question: should we deprecate :username at this point?



 Comments   
Comment by Sean Corfield [ 02/Dec/13 1:23 AM ]

Added :user as an alias for :username





[JDBC-98] Add execute-return-keys! Created: 30/Jun/14  Updated: 01/Jun/16  Resolved: 30/Jun/14

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Tejas Dinkar Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: None
Environment:

Developed on OSX


Attachments: Text File 0001-Adding-execute-return-keys-which-allows-insertions-w.patch    
Patch: Code and Test

 Description   

Added a function execute-return-keys! which can execute a prepared statement and return the key inserted.

Though I only support a single insert, I'm returning a list of values, similar to the insert! function



 Comments   
Comment by Sean Corfield [ 30/Jun/14 11:32 AM ]

Given the availability of insert! which returns keys when it can, what use case does this satisfy?

Comment by Tejas Dinkar [ 30/Jun/14 11:39 AM ]

insert! creates a SQL query (from a string) every time it's called.

I would like to reuse the prepared statement across multiple insert! calls.

I originally started making insert! accept a prepared statement, patching db-do-prepared-return-keys, but I thought the execute-return-keys! would be a better place for it.

Comment by Sean Corfield [ 30/Jun/14 11:54 AM ]

Why not just use db-do-prepared-return-keys for this?

Adding an arbitrary variant of execute! is not a good approach, in my opinion. If you need something lower level than insert! or execute! as they stand, all the pieces should be there already.

Comment by Tejas Dinkar [ 30/Jun/14 12:05 PM ]

I can resubmit the patch to only change db-do-prepared-return-keys.

In the current patch, I did patch db-do-prepared-return-keys, but I just created execute-return-keys! as an alternative front end for it.

Comment by Sean Corfield [ 30/Jun/14 12:26 PM ]

Ah, db-do-prepared-return-keys only supports one parameter group, so I can see why it won't work for you here.

So this is to address the one specific case where you want to create a single prepared statement and run it for multiple parameter groups... If that really isn't possible with the current API, a generic way to do so should be added. You can easily create a prepared statement that will return generated keys, using prepare-statement and query will accept a prepared statement instead of SQL, so this should just be a matter of ensuring execute! and/or db-do-commands can accept a prepared statement instead of SQL. That would be a more generic solution.

Comment by Tejas Dinkar [ 30/Jun/14 12:29 PM ]

yes, but sadly execute! only returns the number of rows updated, not the actual rows (ids), though it accepts a prepared statement.

So there is a need for some generic API that is similar to execute!, but returns the rows.

Comment by Sean Corfield [ 30/Jun/14 12:32 PM ]

Looking more closely at db-do-prepared - which execute! uses - it should accept a prepared statement instead of a SQL string and thus so should execute!.

Therefore, you should be able to do something like:

(execute! db-spec [(prepare-statement (get-connection db-spec) "insert ..." :return-keys true) params params])
Comment by Sean Corfield [ 30/Jun/14 12:51 PM ]

And I've just noticed that this functionality is new in 0.3.4 which has not yet been released! OK, so if I make a new release, you will have the functionality you need.

Comment by Tejas Dinkar [ 30/Jun/14 1:07 PM ]

(yes, I know about that patch to 0.3.4, I submitted that patch).

I believe the only way to get the generated keys, is via the .getGeneratedKeys function, which is used in [1]. It is only in the execution path for insert!, not execute!.

That said, it is possible to do the following (just tried it), so maybe this patch isn't as useful as I'd thought.

(let [prepared-statement (j/prepare-statement ... :return-keys true)]
  (j/execute! db prepared-statement)
  (j/result-set-seq (.getGeneratedKeys prepared-statement)))

[1] https://github.com/clojure/java.jdbc/blob/master/src/main/clojure/clojure/java/jdbc.clj#L684

Comment by Sean Corfield [ 30/Jun/14 1:23 PM ]

I've made the 0.3.4 release. The code you show looks like a reasonable compromise, given my reluctance to extend the API in somewhat arbitrary ways.

I'm going to close this.

Comment by Sean Corfield [ 30/Jun/14 1:25 PM ]

With the 0.3.4 release allowing execute! to accept a prepared statement and the reasonably clean way to execute a prepared statement and then get the generated keys, I don't think a change to the API is necessary.





[JDBC-102] Support (driver specific) constants for :result-type :concurrency and :cursors next to current keywords. Created: 28/Oct/14  Updated: 01/Jun/16  Resolved: 28/Oct/14

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Niels van Klaveren Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

The JDBC spec allows vendors to add their own specific constants for result set holdability, concurrency and cursor types. In clojure.java.jdbc, these are abstracted away to give easy access through keywords that are mapped to the standard constants. With only keywords as an option, other constants cannot be used.

If result-set-type, result-set-concurrency and result-set-holdability were changed from map lookups to functions with case statements containing the keyword/constant pairs, with an :else branch returning the parameter itself, we could use both the current behavior {:cursor :read-only} as well as vendor-specific constants {:cursor SQLServerResultSet/TYPE_SS_SERVER_CURSOR_FORWARD_ONLY}.
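A hypothetical rewrite of one of those lookup helpers, along the lines described (this is a sketch, not the shipped code):

```clojure
;; Sketch: map lookup replaced by a case with a pass-through default,
;; so raw (vendor-specific) int constants are accepted as-is.
(defn result-set-concurrency [v]
  (case v
    :read-only java.sql.ResultSet/CONCUR_READ_ONLY
    :updatable java.sql.ResultSet/CONCUR_UPDATABLE
    ;; anything else (e.g. a vendor constant) passes through unchanged
    v))

;; usage: both keyword and vendor constant forms work
;; (result-set-concurrency :read-only)
;; (result-set-concurrency SQLServerResultSet/CONCUR_SS_OPTIMISTIC_CC)
```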



 Comments   
Comment by Sean Corfield [ 28/Oct/14 5:01 PM ]

Very good point and easy to implement. Thanks Niels!

Comment by Sean Corfield [ 28/Oct/14 5:36 PM ]

Will be in 0.3.6.





[JDBC-117] Variadic keyword arguments don't compose -- provide alternative Created: 08/Apr/16  Updated: 01/Jun/16  Resolved: 09/Apr/16

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

The more we work with java.jdbc at World Singles in composition with other functions, the more painful the variadic keyword argument approach becomes, especially as the number of optional keyword arguments grows.

The API would be easier to work with if the optional keyword arguments could be passed as simple maps. This will be straightforward for some functions, by introducing multiple arities, but harder for others. The multiple arity approach should be considered to be a bridge only, as a step toward deprecating the current approach.

As an example, query has the following argument list – and a pretty horrendous arglists specification to try to explain it!

[db sql-params & {:keys [result-set-fn row-fn identifiers as-arrays?] ...}]

Instead, it could be:

[db sql-params]
[db sql-params {:keys [result-set-fn row-fn identifiers as-arrays?] ...}]

Then the :arglists metadata could be removed (it's non-standard) and the information about sql-params added to the docstring (where it really belongs).

For backward compatibility, an additional arity could be provided:

[db sql-params k v & kvs]

Alex Miller considers this "the worst of all worlds" but I view it purely as a compatibility bridge between the current API (0.5.0) and the cleaner, future API (0.6.0).



 Comments   
Comment by Sean Corfield [ 09/Apr/16 2:49 AM ]

The upcoming 0.5.5 release will add the more composable function signatures and deprecate the unrolled optional keyword argument forms (using them will print a DEPRECATED warning to stdout).

Most of the changes in the API are "obvious" in that where the unrolled optional keyword arguments are allowed, a single map of options is now supported (and preferred). Some of the docstrings have yet to catch up with this. That's a work-in-progress.

insert! was the odd function out since it is already variadic, accepting one or more hash maps representing rows to insert, or a vector of column names followed by one or more vectors of corresponding values to insert. Unrolled optional keyword arguments could then follow those maps / vectors. The approach I've taken here is to introduce an :options keyword to act as a delimiter between the rows/columns and the options (since an options map could be confused with row map to be inserted). This makes the single row insert with options form more verbose:

(insert! db :table {:col1 "val1" :col2 2} :entities (quoted \"))
;; becomes
(insert! db :table {:col1 "val1" :col2 2} :options {:entities (quoted \")})

With hindsight, there are several ways this (and a few other functions in java.jdbc) could have been designed better...

Comment by Sean Corfield [ 09/Apr/16 2:51 AM ]

Note to self: once 0.5.5 is released, I need to update all the examples in http://clojure-doc.org/articles/ecosystem/java_jdbc/using_sql.html

Comment by Sean Corfield [ 09/Apr/16 3:05 PM ]

Version 0.5.5 is available on Maven. The Clojure Guides have been updated to reflect the deprecation and introduction of new syntax. I'll create a new issue to remove the deprecated syntax when 0.6.0 is being prepared.

Comment by Sean Corfield [ 09/Apr/16 3:05 PM ]

Implemented in 0.5.5.





[JDBC-129] PostgreSQL support is broken Created: 12/May/16  Updated: 01/Jun/16  Resolved: 12/May/16

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

Tests fail. Clean-up fails.



 Comments   
Comment by Sean Corfield [ 12/May/16 5:55 PM ]

See JDBC-127 and JDBC-128. Need to get a PostgreSQL test environment up and running somewhere.

Comment by Sean Corfield [ 12/May/16 8:11 PM ]

I have a Docker image containing PostgreSQL so it will be part of normal testing protocol in future.





[JDBC-14] delete-rows does not work with null WHERE Created: 06/Aug/11  Updated: 01/Jun/16  Resolved: 04/Sep/11

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Minor
Reporter: Allen Rohner Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: None


 Description   

(delete-rows "foo" []) generates the SQL "delete from foo where null", which deletes 0 rows. Instead, I expect it to generate "delete from foo".



 Comments   
Comment by Allen Rohner [ 06/Aug/11 3:46 PM ]

I can't find the button to edit the issue, this happened on ver. 0.0.6

Comment by Sean Corfield [ 10/Aug/11 1:46 AM ]

I actually think that would be rather dangerous. Imagine constructing the where parameters dynamically and accidentally passing in an empty sequence - you'd delete everything in the table.

I believe you can safely and deliberately get the effect you are looking for by doing either of these:

(delete-rows "foo" [true])
(delete-rows "foo" [1])
Comment by Sean Corfield [ 04/Sep/11 1:32 AM ]

Given the lack of response to my comment and the fact that there is a clear and explicit workaround, I'm resolving this, declined.





[JDBC-29] Performance improvement (Remove intermediate lazy sequence in resultset-seq) Created: 10/May/12  Updated: 01/Jun/16  Resolved: 10/May/12

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Jürgen Hötzel Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: patch, performance

Attachments: Text File 0001-Don-t-create-intermediate-lazy-sequences-of-vectors.patch    

 Description   

This improves performance on large result sets by up to 30%.



 Comments   
Comment by Sean Corfield [ 10/May/12 12:09 PM ]

Nice catch!





[JDBC-66] Permit non-keyword labels Created: 17/Jul/13  Updated: 01/Jun/16  Resolved: 15/Sep/13

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Kyle Cordes Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: None


 Description   

This line of code in jdbc.clj:

(map (comp keyword identifiers)))

... made sense before the newish as-arrays? feature. The results were always maps, and with maps it is idiomatic to use keywords as keys.

But when returning the results as nested lists, the first entry contains the field identifiers; these might very reasonably be something other than keywords.

I suggest that when the user passes in their own function to translate from fields to keys, they be able to make the keys whatever they like, perhaps plain strings or perhaps something else.

As a workaround, one can just let this code keywordize the identifiers, then unkeywordize them outside afterward.
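That workaround might look like this, assuming the keyword-argument query API of the time (table and column names are illustrative):

```clojure
(require '[clojure.java.jdbc :as j])

;; Let java.jdbc keywordize the column labels, then turn the header
;; row back into plain strings afterward.
(let [[cols & rows] (j/query db ["SELECT id, name FROM widgets"]
                             :as-arrays? true)]
  (cons (mapv name cols) rows))
```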



 Comments   
Comment by Sean Corfield [ 15/Sep/13 4:10 PM ]

Given the workaround is easy - and the behavior to-date is well-known and well-documented - I'm not going to complicate the machinery to support non-keywords in array results here.





[JDBC-70] create-table if-not-exists Created: 12/Oct/13  Updated: 01/Jun/16  Resolved: 21/Nov/13

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Jacob Rask Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: None


 Description   

SQLite has an IF NOT EXISTS clause for CREATE TABLE, but there doesn't seem to be an easy way to use it through java.jdbc.

It is usually an error to attempt to create a new table in a database that already contains a table, index or view of the same name. However, if the "IF NOT EXISTS" clause is specified as part of the CREATE TABLE statement and a table or view of the same name already exists, the CREATE TABLE command simply has no effect (and no error message is returned). An error is still returned if the table cannot be created because of an existing index, even if the "IF NOT EXISTS" clause is specified.

https://sqlite.org/lang_createtable.html
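Even without DSL support, the clause can be used via raw DDL (a sketch; the table definition is illustrative):

```clojure
(require '[clojure.java.jdbc :as j])

;; Pass the literal DDL string through db-do-commands instead of
;; generating it with create-table-ddl.
(j/db-do-commands db
  "CREATE TABLE IF NOT EXISTS fruit (name TEXT, cost INT)")
```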



 Comments   
Comment by Sean Corfield [ 03/Nov/13 9:40 PM ]

DROP TABLE supports this too (MySQL at least, no doubt others). DDL should provide a consistent way to add stuff like this.

Comment by Sean Corfield [ 21/Nov/13 11:12 PM ]

Based on recent feedback about the DSLs that crept into 0.3.0, I'm going to decline this and recommend efforts go toward adding DDL support to Justin Kramer's HoneySQL.

Comment by Sean Corfield [ 21/Nov/13 11:13 PM ]

Won't fix - DSLs are not growing here





[JDBC-104] Prepare-statement should support passing an array of column names to return auto-generated keys Created: 29/Nov/14  Updated: 01/Jun/16  Resolved: 26/Aug/15

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Harri Ohra-aho Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

https://docs.oracle.com/javase/7/docs/api/java/sql/Connection.html#prepareStatement(java.lang.String,%20java.lang.String[])

This would be essential for proper Oracle integration as the current use of java.sql.Statement/RETURN_GENERATED_KEYS only returns the Oracle RowID.



 Comments   
Comment by Sean Corfield [ 11/Mar/15 1:29 PM ]

This could be supported by allowing :return-keys to be a vector (of String) – in addition to just a boolean – but I will note that you can already create the PreparedStatement directly yourself and pass whatever arguments you want and then use that object in other calls (so I'm lowering the priority). It's a good enhancement suggestion tho'!

Comment by Sean Corfield [ 26/Aug/15 9:59 PM ]

Looks like several databases support this (not Derby tho') so I'm going to add this to prepare-statement.

Comment by Sean Corfield [ 26/Aug/15 10:03 PM ]

In 0.4.2, prepare-statement's :return-keys may be a vector of column names to return. Or just truthy, which will use the default return generated keys behavior, or falsey which will not return keys.
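Under the 0.4.2 behavior described above, the Oracle case might be written like this (pre-0.6 keyword-argument style; table and column names are illustrative), combining it with the getGeneratedKeys pattern from JDBC-98:

```clojure
(require '[clojure.java.jdbc :as j])

;; Sketch: request a specific generated-key column. Oracle returns
;; only the ROWID under the default RETURN_GENERATED_KEYS behavior.
(let [con  (j/get-connection db-spec)
      stmt (j/prepare-statement con
                                "INSERT INTO users (name) VALUES (?)"
                                :return-keys ["id"])]
  (j/execute! db-spec [stmt "Alice"])
  (j/result-set-seq (.getGeneratedKeys stmt)))
```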

Comment by Tatu Tarvainen [ 25/Mar/16 5:31 AM ]

I think this should also be a parameter to db-do-prepared-return-keys function. Current implementation just uses :return-keys true.

Comment by Sean Corfield [ 25/Mar/16 5:42 PM ]

That's a low-level function that I wouldn't really expect you to use directly. Instead use prepare-statement to create a PreparedStatement – with a :return-keys vector of column names – and then pass that to db-do-prepared which, like most of java.jdbc, accepts a PreparedStatement in place of the SQL string.

Comment by Tatu Tarvainen [ 26/Mar/16 7:02 AM ]

I was under the impression that db-do-prepared-return-keys is part of the API as it is public. Yesql uses it directly.

If I'm reading it correctly, it looks to me like db-do-prepared only returns the update count or a batch of update counts. So it really isn't the same as db-do-prepared-return-keys.

Comment by Sean Corfield [ 26/Mar/16 9:29 PM ]

OK, it's been a while since I've had to dig around in the guts of java.jdbc and I tend to think in terms of query / execute! / insert! etc since that's where I work most of the time – and so, since insert! is the only piece that calls db-do-prepared-return-keys and it constructs its own SQL, I could see what you were after...

But db-do-prepared-return-keys is the only function that actually returns the keys so the problem is really that you cannot pass it a PreparedStatement like you can with db-do-prepared – and that's the hole in the API. If you could pass in a PreparedStatement (instead of just SQL and parameters), then you could do what you needed. Yes, I can see that being a valuable change. I'll create a ticket for it.

Comment by Sean Corfield [ 26/Mar/16 11:38 PM ]

See JDBC-115 which will update db-do-prepared-return-keys to allow a PreparedStatement. Hopefully that will address your concern.

Comment by Tatu Tarvainen [ 27/Mar/16 1:52 AM ]

Thanks, Sean. That should address my use case perfectly.

Comment by Sean Corfield [ 27/Mar/16 12:38 PM ]

I've kicked off the release process for 0.5.0 – should be on Maven by tomorrow.





[JDBC-130] hyphens not allowed for column and table names Created: 18/May/16  Updated: 01/Jun/16  Resolved: 18/May/16

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Trivial
Reporter: Tim Simpson Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: None


 Description   

This works:

(require '[clojure.java.jdbc :as sql])
(def db {:classname "org.sqlite.JDBC", :subprotocol "sqlite", :subname ":memory:"})
(sql/execute! db [(apply sql/create-table-ddl :table_1 [[:column_1 :text][:column_2 :text]])])

this doesn't:

(sql/execute! db [(apply sql/create-table-ddl :table-1 [[:column-1 :text][:column-2 :text]])])

SQLException [SQLITE_ERROR] SQL error or missing database (near "-": syntax error) org.sqlite.DB.newSQLException (DB.java:383)

It looks like create-table-ddl is creating the text "CREATE TABLE table-1 (column-1 text, column-2 text)". Should it be quoting the table names?



 Comments   
Comment by Sean Corfield [ 18/May/16 2:38 PM ]

You need to use :entities to provide a quoting strategy per the documentation:

http://clojure.github.io/java.jdbc/#clojure.java.jdbc/create-table-ddl
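As a sketch, the hyphenated version from the description works once a quoting strategy is supplied via :entities. This assumes the 0.6-style options map on create-table-ddl (earlier versions used trailing keyword arguments); SQLite accepts double-quoted identifiers:

```clojure
(require '[clojure.java.jdbc :as sql])

(def db {:classname "org.sqlite.JDBC", :subprotocol "sqlite", :subname ":memory:"})

;; :entities maps each table/column name through a quoting function, so the
;; generated DDL becomes CREATE TABLE "table-1" ("column-1" text, "column-2" text)
(sql/execute! db [(sql/create-table-ddl :table-1
                                        [[:column-1 :text] [:column-2 :text]]
                                        {:entities (sql/quoted \")})])
```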

Comment by Tim Simpson [ 18/May/16 3:40 PM ]

Thanks!





[JDBC-19] Consolidate jdbc / jdbc.internal namespaces (post 0.1.0) Created: 12/Oct/11  Updated: 01/Jun/16  Resolved: 23/Apr/12

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

Since clojure.java.jdbc.internal is not intended to be called externally, merge it into clojure.java.jdbc and make its members private. Then merge/simplify functions where appropriate.



 Comments   
Comment by Sean Corfield [ 23/Apr/12 3:22 AM ]

Fixed in 0.2.0 release.





[JDBC-30] Consider to make do-prepared-return-keys public Created: 10/May/12  Updated: 01/Jun/16  Resolved: 10/May/12

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Roman Scherer Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

Libraries like Korma, which build upon clojure.java.jdbc, depend on or have use cases for do-prepared-return-keys. It's a useful function and we would like to use it. As discussed over here:

https://github.com/ibdknox/Korma/pull/60#issuecomment-5630604



 Comments   
Comment by Sean Corfield [ 10/May/12 2:12 PM ]

Made do-prepared-return-keys public. Will cut a new build.





[JDBC-36] using existing java based connection pool in clojure Created: 24/Jul/12  Updated: 01/Jun/16  Resolved: 12/Aug/12

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Amir Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: None


 Description   

We have a connection pool in Java and would like to use the same pool for our complete project, where the code is a mixture of Clojure and Java.

(def db (ConnectionManager/getConnection)) ; #<ConnectionHandle com.jolbox.bonecp.ConnectionHandle@53371566> is the connection type we get after executing this line
(sql/with-connection db
  (sql/with-query-results res ["select * from DummyTable where rownum <=100"]
    (vec res)
    (map #(println (class (val %)) " " (key %)) (first res))))



 Comments   
Comment by Amir [ 24/Jul/12 3:00 AM ]

If we execute the above (with-connection / with-query-results) I get the following exception:

IllegalArgumentException db-spec com.jolbox.bonecp.ConnectionHandle@53371566 is missing a required parameter clojure.java.jdbc/get-connection (jdbc.clj:192)

Comment by Sean Corfield [ 24/Jul/12 12:34 PM ]

Use the :datasource option to define your db-spec:

(def db {:datasource bonecpDataSourceObject})

I looked at the BoneCP docs and I don't see a ConnectionManager class but if you define the db-spec based on the BoneCPDataSource object, java.jdbc will call getConnection on that object to get a working connection for operations.
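A minimal sketch of this suggestion, assuming BoneCP's BoneCPDataSource with its setJdbcUrl/setUsername/setPassword setters; the JDBC URL and credentials are placeholders:

```clojure
(require '[clojure.java.jdbc :as sql])
(import 'com.jolbox.bonecp.BoneCPDataSource)

;; Any javax.sql.DataSource works here; java.jdbc calls .getConnection on it.
(def pool (doto (BoneCPDataSource.)
            (.setJdbcUrl "jdbc:oracle:thin:@//localhost:1521/XE")
            (.setUsername "scott")
            (.setPassword "tiger")))

(def db {:datasource pool})

(sql/with-connection db
  (sql/with-query-results res ["select * from DummyTable where rownum <= 100"]
    (doall res)))
```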

Comment by Sean Corfield [ 31/Jul/12 7:49 PM ]

Amir, have you had a chance to try my suggestion?

Comment by Amir [ 01/Aug/12 2:20 AM ]

Hi Sean,

No, we moved to Clojure-based pooling a while ago. We thought it better to have everything in Clojure rather than a mix of Java and Clojure. But I can give it a try today.

Thanks for your suggestion. will keep you posted.

Comment by Sean Corfield [ 12/Aug/12 9:35 PM ]

I'll close this out then Amir, thanx.

Comment by Sean Corfield [ 12/Aug/12 9:36 PM ]

I believe there is a way to access the BoneCP connection pool directly and use it with java.jdbc as-is; however, it is no longer an issue for the creator of this ticket.





[JDBC-69] Allow use on Android Created: 20/Sep/13  Updated: 01/Jun/16  Resolved: 03/Nov/13

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Zach Oakes Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None

Attachments: Text File jdbc.clj.patch    
Patch: Code

 Description   

This patch uses the when-available macro from math.numeric-tower to check for the existence of javax.naming.InitialContext before using it. Since the javax.naming.* namespace is not available on Android, its current use prevents this library from running on that platform.






[JDBC-78] clojure.java.jdbc.sql missing in [org.clojure/java.jdbc "0.3.0-beta2"] Created: 02/Dec/13  Updated: 01/Jun/16  Resolved: 02/Dec/13

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Justin Woo Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: None


 Description   

Specifying [org.clojure/java.jdbc "0.3.0-beta2"] in project.clj, running any leiningen task throws this error:

#<FileNotFoundException java.io.FileNotFoundException: Could not locate clojure/java/jdbc/sql__init.class or clojure/java/jdbc/sql.clj on classpath: >

Reverting to 0.3.0-beta1 works normally.



 Comments   
Comment by Justin Woo [ 02/Dec/13 3:32 PM ]

Never mind, I found http://corfield.org/blog/post.cfm/clojure-java-jdbc-0-3-0-beta2

I guess the code examples are just outdated on github...

How do I go about closing this issue?

Comment by Sean Corfield [ 02/Dec/13 4:28 PM ]

Where did you find code examples that still refer to that deprecated (& now removed) namespace?

Comment by Sean Corfield [ 02/Dec/13 4:30 PM ]

Oops! Right there on the README page... darn, thought I'd caught all of those...

Comment by Sean Corfield [ 02/Dec/13 4:38 PM ]

README has been updated!





[JDBC-93] Add support for read-only transactions Created: 26/Mar/14  Updated: 01/Jun/16  Resolved: 30/Jun/14

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Gerrit Hentschel Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None
Environment:

[org.clojure/java.jdbc "0.3.3"]



 Description   

Currently it is only possible to set the transaction isolation level (via db-transaction*). In our case we use SERIALIZABLE isolation, so it makes sense to be able to mark read-only transactions using Connection/setReadOnly.

Thanks!



 Comments   
Comment by Sean Corfield [ 26/Mar/14 8:56 PM ]

Can you explain a bit more about how you'd expect this to work? Perhaps show me the sort of code / API you'd want to write / use for this?

Comment by Gerrit Hentschel [ 27/Mar/14 3:07 AM ]

Sure, actually I created a patch that describes how I would like to use it, but when trying to attach the file I get a 500 Internal Server Error.

I hope it is okay to just paste it here:

From 4a4ea3d478f7ddb717764499b7d7c3e5efb5caee Mon Sep 17 00:00:00 2001
From: Gerrit Hentschel <gerrit@doo.net>
Date: Thu, 27 Mar 2014 08:57:54 +0100
Subject: [PATCH] add support for read-only transactions

---
 src/main/clojure/clojure/java/jdbc.clj | 21 ++++++++++++++-------
 1 file changed, 14 insertions(+), 7 deletions(-)

diff --git a/src/main/clojure/clojure/java/jdbc.clj b/src/main/clojure/clojure/java/jdbc.clj
index 01c4f21..47a6d74 100644
--- a/src/main/clojure/clojure/java/jdbc.clj
+++ b/src/main/clojure/clojure/java/jdbc.clj
@@ -559,16 +559,20 @@ compatibility but it will be removed before a 1.0.0 release." }
   The isolation option may be :none, :read-committed, :read-uncommitted,
   :repeatable-read, or :serializable. Note that not all databases support
   all of those isolation levels, and may either throw an exception or
-  substitute another isolation level."
-  [db func & {:keys [isolation]}]
+  substitute another isolation level.
+  The read-only? option puts the transaction in read-only mode."
+  [db func & {:keys [isolation read-only?]}]
   (if (zero? (get-level db))
     (if-let [^java.sql.Connection con (db-find-connection db)]
       (let [nested-db (inc-level db)
             auto-commit (.getAutoCommit con)
-            old-isolation (.getTransactionIsolation con)]
+            old-isolation (.getTransactionIsolation con)
+            old-read-only? (.isReadOnly con)]
         (io!
          (when isolation
            (.setTransactionIsolation con (isolation isolation-levels)))
+         (when read-only?
+           (.setReadOnly con true))
          (.setAutoCommit con false)
          (try
            (let [result (func nested-db)]
@@ -583,9 +587,11 @@ compatibility but it will be removed before a 1.0.0 release." }
              (db-unset-rollback-only! nested-db)
              (.setAutoCommit con auto-commit)
              (when isolation
-               (.setTransactionIsolation con old-isolation))))))
+               (.setTransactionIsolation con old-isolation))
+             (when read-only?
+               (.setReadOnly con old-read-only?))))))
       (with-open [^java.sql.Connection con (get-connection db)]
-        (db-transaction* (add-connection db con) func :isolation isolation)))
+        (db-transaction* (add-connection db con) func :isolation isolation :read-only? read-only?)))
     (try
       (func (inc-level db))
       (catch Exception e
@@ -595,8 +601,9 @@ compatibility but it will be removed before a 1.0.0 release." }
   "Evaluates body in the context of a transaction on the specified database connection.
   The binding provides the database connection for the transaction and the name to which
   that is bound for evaluation of the body. The binding may also specify the isolation
-  level for the transaction, via the :isolation option.
-  (with-db-transaction [t-con db-spec :isolation level]
+  level for the transaction, via the :isolation option and/or set the transaction to
+  read-only, via the :read-only? option.
+  (with-db-transaction [t-con db-spec :isolation level :read-only? true]
     ... t-con ...)
   See db-transaction* for more details."
   [binding & body]
-- 
1.8.3.4 (Apple Git-47)
Comment by Sean Corfield [ 27/Mar/14 11:35 AM ]

Thanks Gerrit. That seems nice and simple. I can't take the patch exactly as-is, because you do not appear to be a formal contributor yet - http://clojure.org/contributing - but this definitely points me in the right direction to create tests and implement something that should provide the functionality you're after.

Comment by Gerrit Hentschel [ 28/Mar/14 3:30 AM ]

Great! Glad it helped.
I will take care of becoming a formal contributor, but feel free to use that patch as you see fit.
Thanks Sean.

Comment by Sean Corfield [ 30/Jun/14 9:46 PM ]

Will be in 0.3.5
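Once this landed in 0.3.5, usage follows the docstring example from the patch (keyword-style options, as in that era's API; db-spec and the account table are assumed to be defined):

```clojure
(require '[clojure.java.jdbc :as jdbc])

;; a serializable, read-only transaction
(jdbc/with-db-transaction [t-con db-spec :isolation :serializable :read-only? true]
  (jdbc/query t-con ["select count(*) as n from account"]))
```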





[JDBC-109] :port should default to 5432 for PostgreSQL Created: 11/Jun/15  Updated: 01/Jun/16  Resolved: 27/Jul/15

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

We default the port for MySQL and SQL Server. We should do it for PostgreSQL too.

We should probably clean up the handling of Apache Derby, SQLite, H2, and HSQLDB with :dbtype so the host/port are omitted.



 Comments   
Comment by Sean Corfield [ 27/Jul/15 1:19 AM ]

Port defaulted and Derby, SQLite and HSQLDB updated to support dbtype / dbname.
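A sketch of the resulting db-specs (database names and credentials are placeholders):

```clojure
;; PostgreSQL: host defaults to 127.0.0.1 and the port now defaults to 5432
(def pg-db {:dbtype "postgresql" :dbname "mydb"
            :user "me" :password "secret"})

;; embedded databases skip host/port entirely with :dbtype / :dbname
(def derby-db  {:dbtype "derby"  :dbname "my-derby-db"})
(def sqlite-db {:dbtype "sqlite" :dbname "test.db"})
```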





[JDBC-115] Allow db-do-prepared-return-keys to accept a PreparedStatement Created: 26/Mar/16  Updated: 01/Jun/16  Resolved: 26/Mar/16

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

See JDBC-104 for the justification of this.



 Comments   
Comment by Sean Corfield [ 26/Mar/16 11:38 PM ]

Will be in 0.5.0 release (which drops Clojure 1.3 support).





[JDBC-116] Clojure Postgres SQL doesn't Quote mixed case tables Created: 30/Mar/16  Updated: 01/Jun/16  Resolved: 30/Mar/16

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Justin Thomas Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: None


 Description   

(j/insert! pg-db :SomeTable
           {:Id 47
            :date "2015-02-02"
            :score 0.12
            :updatedAt "2015-02-01"})

;; throws an error about not finding sometable



 Comments   
Comment by Justin Thomas [ 30/Mar/16 3:08 PM ]

In this case SomeTable is a table in Postgres.

select * from "SomeTable" --works

select * from SomeTable --interpreted as select * from sometable;

Comment by Justin Thomas [ 30/Mar/16 3:11 PM ]

You have to create the table with ""

So:
create table "SomeTable" (bar text);

Comment by Sean Corfield [ 30/Mar/16 3:20 PM ]

You need to provide :entities (quoted \") in your call to insert!.

Comment by Sean Corfield [ 30/Mar/16 3:21 PM ]

Not a bug: the library provides :entities and quoting strategies to support this already.

Comment by Sean Corfield [ 30/Mar/16 3:24 PM ]

For comparison, MySQL needs (quoted \`) and I think SQL Server needs open/close square brackets – quoted can accept a vector pair of delimiters. Quoting is also necessary when a table or column name matches a reserved word in SQL.
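The quoting strategies mentioned here can be sketched as follows. This is a hedged example: it assumes the 0.6-style options map on insert! (earlier versions took trailing keyword arguments), and that quoted accepts a single delimiter character or a vector pair:

```clojure
(require '[clojure.java.jdbc :as j])

;; each call returns a function suitable as the :entities option
(def pg-entities    (j/quoted \"))       ; PostgreSQL: "SomeTable"
(def mysql-entities (j/quoted \`))       ; MySQL: `SomeTable`
(def mssql-entities (j/quoted [\[ \]]))  ; SQL Server: [SomeTable]

(j/insert! pg-db :SomeTable
           {:Id 47 :date "2015-02-02" :score 0.12 :updatedAt "2015-02-01"}
           {:entities pg-entities})
```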





[JDBC-68] Optional 'identifier' kwarg should be responsible to keywordize row keys Created: 14/Aug/13  Updated: 01/Jun/16  Resolved: 15/Sep/13

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Shantanu Kumar Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: enhancement
Environment:

Clojure (JVM)


Attachments: File identifier-should-keywordize.diff    
Patch: Code

 Description   

The optional 'identifier' kwarg only transforms the string column name to another string; the row keys are always keywordized. In some result-parsing cases, where a different transformation may be required, the default keywordization is unnecessary.

A possible solution is to expand the responsibility of 'identifier' to keywordize (the default behavior) only when required. This solution is attached as a patch to this issue, and passes all tests.



 Comments   
Comment by Sean Corfield [ 14/Aug/13 3:34 PM ]

The problem with this approach is that anyone using the `identifiers` approach - overriding the default behavior - will now have broken code. I'll give it more thought. I think a :keyword? true|false argument, passed thru everything is the only way to maintain backward compatibility but that's a lot of churn so breaking backward compatibility may be the less painful approach (esp. since 0.3.0 is a whole new API anyway).

Comment by Sean Corfield [ 15/Sep/13 4:15 PM ]

java.jdbc has always converted column identifiers to keywords and having keys in maps be keywords is idiomatic in Clojure. If some folks want non-keywords, they can post-process the results.

Adding another option to turn keywords on and off would make the API more complex, and breaking compatibility by suddenly not turning the column names into keywords (forcing a lot of existing code to change) is not acceptable at this point.
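Post-processing, as suggested, is a short transformation (a sketch; db and the fruit table are hypothetical):

```clojure
(require '[clojure.java.jdbc :as jdbc])

;; turn the idiomatic keyword keys back into strings after the fact
(map (fn [row]
       (into {} (for [[k v] row] [(name k) v])))
     (jdbc/query db ["select id, appearance from fruit"]))
;; rows now use string keys, e.g. {"id" 1, "appearance" "rosy"}
```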





[JDBC-108] Remove Double Counting of Options in insert-sql Created: 18/May/15  Updated: 01/Jun/16  Resolved: 18/May/15

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Earl St Sauver Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None

Attachments: Text File doubleCountEstsauver.patch    
Patch: Code

 Description   

insert-sql (~line 965) iterates through the options to count the number of rows and col/vec specs, duplicating the calls to count.



 Comments   
Comment by Sean Corfield [ 18/May/15 11:23 AM ]

Thanks Earl. Since you're not on the clojure.org/contributing list, I can't use the patch directly but I went ahead and removed the duplicated calls to `count` that you pointed out.





[JDBC-60] insert! is not documented to always return a sequence of generated keys Created: 26/May/13  Updated: 01/Jun/16  Resolved: 15/Sep/13

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Task Priority: Trivial
Reporter: Christian Sperandio Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: None
Environment:

OS X Clojure 1.5.1 clojure.jdbc 0.3-alpha4



 Description   

The insert! function always returns a sequence of keys even if only one record is inserted.



 Comments   
Comment by Sean Corfield [ 26/May/13 9:58 AM ]

I think this is more consistent than returning a sequence sometimes and a single value other times. If you want the first key, you know you're always safe to call `first` on the result. I'll probably just update the docstring to make it clearer what happens under different circumstances.

Comment by Christian Sperandio [ 26/May/13 10:21 AM ]

I agree with you about the consistency of always returning a sequence.
I was afraid a future change (closer to the insert-record function) would break the current behavior.

Comment by Sean Corfield [ 15/Sep/13 4:12 PM ]

This is a documentation issue and the documentation is now editable by the community in its new home on http://clojure-doc.org





[JDBC-10] Attempts to return generated keys on batch inserts fails on PostgreSQL and MS SQL Server Created: 18/Jun/11  Updated: 01/Jun/16  Resolved: 27/Jun/11

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

From andreasmk2:

"JDBC driver for PostgreSQL and ms-sql does not support returning of generated keys from statements executed in batch. The SQL exception was thrown when 'do-prepared*' was called with 'return-keys' set to true. Currently only the function 'insert-values' calls a new method 'do-single-prepared-get-keys*' when single relation is inserted."



 Comments   
Comment by Sean Corfield [ 27/Jun/11 5:54 PM ]

I just committed a potential fix for this. It's available in the Sonatype snapshots repo.





[JDBC-28] prepared-statement* behaves different on OpenJDK 1.6 and OS X Java 1.6. Created: 30/Mar/12  Updated: 01/Jun/16  Resolved: 31/Mar/12

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Jeroen van Dijk Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: None
Environment:

Ubuntu
OpenJDK:
java version "1.6.0_22"
OpenJDK Runtime Environment (IcedTea6 1.10.6) (6b22-1.10.6-0ubuntu1)
OpenJDK 64-Bit Server VM (build 20.0-b11, mixed mode)

OS X Java
java version "1.6.0_29"
Java(TM) SE Runtime Environment (build 1.6.0_29-b11-402-10M3527)
Java HotSpot(TM) 64-Bit Server VM (build 20.4-b02-402, mixed mode)

Relevant clojure libraries
[org.clojure/clojure "1.3.0"]
[korma "0.3.0-beta9"]
[org.clojure/java.jdbc "0.1.3"]
[postgresql/postgresql "9.1-901.jdbc3"]



 Description   

I'm using the Clojure library Korma to generate SQL for Postgres, which depends on clojure.java.jdbc. Recently I added a custom 'upsert' query. Using it on normal Java 1.6 is fine, but on OpenJDK it breaks due to the use of 'java.sql.Statement/RETURN_GENERATED_KEYS'. This makes the Postgres driver add a seemingly random "RETURNING *" at the end of the query.

On normal Java 1.6 java.sql.Statement/RETURN_GENERATED_KEYS doesn't exist, so an exception is caught and everything works as expected. Sounds like the world upside down, but I'm probably missing the rationale behind this.

The line that causes the weird behavior is here: https://github.com/clojure/java.jdbc/blob/master/src/main/clojure/clojure/java/jdbc/internal.clj#L315

The code that showcases the 'bug' is here:
https://gist.github.com/2e8a3d55d80707ce79e0

(Sorry for the monkey patching in the gist, if there is a better way I would love to hear it)

I guess a naive patch would be to remove the java.sql.Statement/RETURN_GENERATED_KEYS line, but there must be a reason for its existence I assume.



 Comments   
Comment by Sean Corfield [ 31/Mar/12 1:30 AM ]

RETURN_GENERATED_KEYS is required for inserts to return the generated keys. Since this works on Sun's Java 6 and with all intended databases, and is per spec, it is not going away.

Right now OpenJDK / PostgreSQL is not a supported combination, sorry.





[JDBC-39] Boolean - BIT(1) - columns are returned as byte[] via some DB connections Created: 10/Sep/12  Updated: 01/Jun/16  Resolved: 12/Sep/12

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: None
Environment:

Specifically using the MySQL connector 5.0.x or 5.1.x against a Tungsten Connector proxy



 Description   

The root cause appears to be that rs.getObject( i ) seems to take the field flags into effect over the basic java.sql.Types values.

In this case a BIT(1) column comes back from MySQL as type 16 (BOOLEAN), flags UNSIGNED. From Tungsten it comes back as type 16, flags BINARY BLOB UNSIGNED.

The resultset-seq construction code is pretty cavalier about using just getObject() instead of more specific types so this might be the start of a smarter approach to that.



 Comments   
Comment by Sean Corfield [ 10/Sep/12 5:51 PM ]

Further update: this looks to be a configuration or version issue with the Tungsten Connector setup as we've been able to repro the problem in one environment but it works as expected in a newer environment.

May still go ahead with this change anyway since I believe it's a good change for robustness, but it looks lower priority right now.

Comment by Sean Corfield [ 10/Sep/12 6:51 PM ]

Pushed a fix for this but would like more hands on this in SNAPSHOT form before declaring it resolved.

Comment by Sean Corfield [ 10/Sep/12 7:46 PM ]

Tungsten's Connector defeats this fix. Even tho' the fields have an underlying mysqlType value of 16 (BOOLEAN), (.getColumnType) returns -3 (byte[] presumably) so Boolean is not forced as I had hoped.

Comment by Sean Corfield [ 12/Sep/12 7:51 PM ]

The underlying behavior is a quirk of the Tungsten Connector configuration which we believe we've successfully corrected. Given the result of (.getColumnType) is -3, I'm not sure anything could be done in code to address this, so I'm going to resolve this as invalid.

Comment by Sean Corfield [ 12/Sep/12 7:52 PM ]

Not a code issue.

Comment by Sean Corfield [ 06/Aug/13 6:16 PM ]

In case anyone else runs into this with Tungsten: BIT fields are handled in a version-specific way and the logic assumes three segments to the version number. Tungsten's default configuration has only two segments in the version number so the BIT handling logic barfs. The workaround is to change the Tungsten configured version to match the actual MySQL version you are proxying (e.g., 5.5.30-percona instead of 5.1-tungsten).





[JDBC-82] Fix Android release build issue Created: 13/Dec/13  Updated: 01/Jun/16  Resolved: 13/Dec/13

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Zach Oakes Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None
Environment:

Ubuntu x64, OpenJDK 7


Attachments: Text File jdbc.clj.patch    
Patch: Code

 Description   

This patch fixes one remaining issue related to using this library in Android apps. While compiling for Android is now possible, there is an error thrown during release builds. This occurs during the "dexing" phase, when it converts from JVM to Dalvik bytecode.

It fails when it gets to the class file generated by the IResultSetReadColumn protocol definition. This is because it makes duplicate use of the "_" symbol to represent unused arguments. This patch simply renames the symbols so they are unique.



 Comments   
Comment by Sean Corfield [ 13/Dec/13 1:07 AM ]

Updated. Will be in rc2/gold.





[JDBC-101] Don't make column names unique when `as-arrays?` is true Created: 16/Oct/14  Updated: 01/Jun/16  Resolved: 28/Oct/14

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Ragnar Dahlen Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None

Attachments: Text File jdbc-101.patch    

 Description   

result-set-seq ensures that column name keys are unique by suffixing identical column names with an incrementing integer. This is important and required when returning maps as key collisions are possible (result sets can easily contain multiple columns with same label, in different positions) and would yield surprising results.

However, when result-set-seq is called with :as-arrays? true, rows are returned as vectors with the columns ordered as in the SQL query. The first row returned is a vector of column names. In the current implementation, column name keys are still made unique.

I find this behaviour unhelpful as it introduces an unnecessary difference between column names as returned by the query and what they are called in the column name row.

I propose the following change:

When as-arrays? is true, don't make column names unique. This is obviously a breaking change if you rely on the uniquified column names (unlikely?). If this is considered too big a risk, I propose we introduce an option to toggle this behaviour, uniquify-column-names?, defaulting to true.



 Comments   
Comment by Ragnar Dahlen [ 16/Oct/14 4:49 AM ]

Attached initial go at a patch, taking the less conservative approach and changing the current behaviour.

Comment by Sean Corfield [ 16/Oct/14 10:08 AM ]

I'd want to survey some of the java.jdbc users who rely on as-arrays? before changing the default behavior but I can see the value in suppressing the uniqueness behavior.

Comment by Sean Corfield [ 28/Oct/14 6:06 PM ]

I think the approach I'm going to take here is to allow :as-arrays? to be either true, false, or :cols-as-is. So :as-arrays? true will continue to work as it does now but if you specify :as-arrays? :cols-as-is it will work like :as-arrays? true but also omit the step that makes column names unique.

Comment by Sean Corfield [ 28/Oct/14 6:16 PM ]

:as-arrays? can now be true, false, or :cols-as-is. The default remains false. Specifying :cols-as-is behaves like true but also suppresses the column uniqueness logic.
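A sketch of the difference (db and the queried tables are hypothetical, and the exact uniquifying suffix is an implementation detail):

```clojure
(require '[clojure.java.jdbc :as jdbc])

;; default: duplicate column labels are made unique in the header row
(jdbc/query db ["select f.id, o.id from fruit f, orders o"]
            {:as-arrays? true})
;; header row has distinct keys, e.g. [:id :id_2]

;; :cols-as-is: labels are left exactly as the query returned them
(jdbc/query db ["select f.id, o.id from fruit f, orders o"]
            {:as-arrays? :cols-as-is})
;; header row is [:id :id]
```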





[JDBC-13] Add unit tests for generated keys Created: 25/Jul/11  Updated: 01/Jun/16  Resolved: 07/May/13

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Task Priority: Minor
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

Need to test that generated keys come back as expected for various databases - and that DBs which can't generate keys let you use transactions and select identity kind of SQL stuff.



 Comments   
Comment by Sean Corfield [ 07/May/13 9:28 PM ]

This has sort of been handled along the way with functions used in the test suite to reflect how various databases handle key generation / return. Surprisingly few seem to reliably return keys on an insert.





[JDBC-22] Infer :classname from :subprotocol Created: 17/Oct/11  Updated: 01/Jun/16  Resolved: 04/Jan/12

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Phil Hagelberg Assignee: Phil Hagelberg
Resolution: Completed Votes: 0
Labels: None

Attachments: Text File 0001-Infer-classname-from-subprotocol.patch    
Patch: Code and Test
Waiting On: Sean Corfield

 Description   

Having to specify both :classname and :subprotocol seems like needless boilerplate. Only :subprotocol should be required.



 Comments   
Comment by Phil Hagelberg [ 17/Oct/11 1:48 PM ]

This should take care of it. Updated the tests to remove the inferred value.

Comment by Sean Corfield [ 04/Jan/12 11:53 PM ]

Sorry, I should have closed this out ages ago - when the patch was applied.





[JDBC-26] Add sqlite3 support Created: 13/Feb/12  Updated: 01/Jun/16  Resolved: 23/Apr/12

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Nelson Morris Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None

Attachments: Text File mssql-error.log     Text File sqlite3-support.patch     Text File sqlite_transaction_test.patch    
Patch: Code and Test

 Description   

sqlite3 has some problems:

1) PreparedStatement.addBatch does not do anything without some parameters, so executeBatch doesn't do anything.
2) Transaction support appears to require closing the ResultSet object from generatedKeys.

I've only tested the included patch with the default test dbs and sqlite3.



 Comments   
Comment by Sean Corfield [ 14/Feb/12 1:01 AM ]

This will break things - see JDBC-16 which required that .addBatch be called when there are no param-groups. I'm not going to reject this out of hand but patches for new database support need testing against MySQL, PostgreSQL and MS SQL Server.

Comment by Nelson Morris [ 14/Feb/12 1:48 PM ]

Test suite runs correctly for TEST_DBS=mysql,postgres,derby,hsqldb,sqlite3 and TEST_DBS=mysql-str,postgres-str. This just leaves MS SQL server, which I do not have the ability to run.

Regarding JDBC-16, when I revert to just using .executeBatch for the no param-groups case I can see the errors produced in the test-suite. Using .executeUpdate for the no params-group case continues to fix these errors (note MS SQL server untested). I do not see any other information in the ticket that would explain why using .executeUpdate instead of .addBatch/.executeBatch for the no params-group case would break JDBC-16. Is there a reason I am missing?

Comment by Nelson Morris [ 17/Feb/12 10:05 PM ]

I ran the test suite against an ec2 instance with MS SQL server 2008 R2. Current test suite on master fails with attached mssql-error.log. Using patch produces the same output.

Please let me know if there is anything that you would like changed.

Comment by Sean Corfield [ 17/Feb/12 10:14 PM ]

Thanx Nelson. I've been a bit busy with work this week but I'll try to look at this at the weekend. I appreciate your thoroughness on testing this!

Comment by Sean Corfield [ 29/Feb/12 11:33 PM ]

Integrated parts of the patch. SQLite 3 is now supported. Thanx!

Comment by Nelson Morris [ 01/Mar/12 12:57 AM ]

I still get the transaction issue (part 2 from original description) when clojure.java.jdbc tries to return the generated keys, in a transaction, using sqlite. https://gist.github.com/1947746

I believe this is because the ResultSet from .getGeneratedKeys never gets .close called, so sqlite keeps the db locked and can't rollback. Would you like me to make a new issue?

Comment by Sean Corfield [ 01/Mar/12 1:03 AM ]

The tests pass locally (Mac OS X) and on the Clojure build server (Linux) but you're right that I didn't include the close call on the result set.

I didn't include your additional test which I'm guessing is the key to highlighting the errant behavior for SQLite?

(I couldn't actually apply your patch because I'd spent some time getting all the tests passing for MS SQL Server with both the Microsoft and jTDS drivers)

Comment by Nelson Morris [ 01/Mar/12 1:19 AM ]

Attached patch that fails with `mvn test`.

Comment by Sean Corfield [ 23/Apr/12 3:35 AM ]

I believe this was fixed in 0.1.3 (at the end of February). As far as I can tell, all of the patch components are applied. Specifically, your transaction test is included in the test suite and passes.





[JDBC-83] Test name test-update-or-insert-values appears twice in namespace clojure.java.test-jdbc Created: 15/Dec/13  Updated: 01/Jun/16  Resolved: 16/Dec/13

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Minor
Reporter: Andy Fingerhut Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

When the same name is used for more than one deftest, the first one's tests will never be run, reducing test coverage. Recommend changing the name of one of these two deftests.



 Comments   
Comment by Sean Corfield [ 16/Dec/13 12:34 AM ]

Thanx Andy. This happened when I added a whole bunch of -with-db tests and then (just recently) started renaming them as part of the cleanup.

Comment by Sean Corfield [ 16/Dec/13 3:17 PM ]

Looks like it was just missed when I added the -with-isolation tests more recently.





[JDBC-91] Issue with H2 "script" command Created: 20/Feb/14  Updated: 01/Jun/16  Resolved: 26/Aug/15

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Minor
Reporter: Mike Anderson Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

While trying to execute an H2 "script" statement as follows:

(defn script []
  (jdbc/execute! db [(str "script to '" "dbscript.txt" "';")]))

I get the following error message:

org.h2.jdbc.JdbcSQLException: Method is not allowed for a query. Use execute or executeQuery instead of executeUpdate; SQL statement:
script to 'dbscript.txt'; [90001-175]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:332)
at org.h2.message.DbException.get(DbException.java:172)
at org.h2.message.DbException.get(DbException.java:149)
at org.h2.message.DbException.get(DbException.java:138)
at org.h2.command.Prepared.update(Prepared.java:200)
at org.h2.command.CommandContainer.update(CommandContainer.java:79)
at org.h2.command.Command.executeUpdate(Command.java:253)
at org.h2.jdbc.JdbcPreparedStatement.executeUpdateInternal(JdbcPreparedStatement.java:154)
at org.h2.jdbc.JdbcPreparedStatement.executeUpdate(JdbcPreparedStatement.java:140)
at clojure.java.jdbc$db_do_prepared$fn__339.invoke(jdbc.clj:639)
at clojure.java.jdbc$db_transaction_STAR_.doInvoke(jdbc.clj:514)
at clojure.lang.RestFn.invoke(RestFn.java:425)
at clojure.java.jdbc$db_do_prepared.doInvoke(jdbc.clj:638)
at clojure.lang.RestFn.invoke(RestFn.java:442)
at clojure.java.jdbc$execute_BANG_$execute_helper__368.invoke(jdbc.clj:738)
at clojure.java.jdbc$execute_BANG_.doInvoke(jdbc.clj:742)

It's possible that I'm just doing something stupid as I don't really know the internals of java.jdbc, but it looks like this is a bug with the type of query being constructed in db-do-prepared?



 Comments   
Comment by Sean Corfield [ 28/Feb/14 1:44 AM ]

Mike, since "script to file.txt;" is not a query or update, I would expect db-do-commands to be the right function to call here, not execute! - can you try that and let me know if it works?

Comment by Mike Anderson [ 14/Apr/14 10:27 PM ]

Hi Sean, I still seem to get an error:

(j/db-do-commands db (str "SCRIPT TO '" "dbscript.txt" "';"))
=>
JdbcBatchUpdateException Method is not allowed for a query. Use execute or executeQuery instead of executeUpdate; SQL statement:
SCRIPT TO 'dbscript.txt'; [90001-175] org.h2.jdbc.JdbcStatement.executeBatch (JdbcStatement.java:672)

Comment by Mike Anderson [ 14/Apr/14 10:33 PM ]

Interestingly, this seems to work:

(j/query db [(str "SCRIPT TO '" "dbscript.txt" "';")])

So I have a good workaround at least. Not sure what the right overall solution is - maybe some clearer error messages or docstrings regarding what is / isn't allowed with each type of query would be good?

Comment by Sean Corfield [ 14/Apr/14 11:03 PM ]

Well, I don't have much control over what the JDBC driver itself accepts or doesn't accept for various commands but perhaps introducing a variant of db-do-commands that allows you to specify exactly what method on the PreparedStatement gets called would be helpful here. Or maybe you could just create a PreparedStatement object - there's an API for that - and run non-standard commands thru that directly?

Comment by Mike Anderson [ 15/Apr/14 11:02 PM ]

Well I'm not a JDBC expert so don't want to give too strong an opinion on API design

My feeling is that some kind of variant of `db-do-commands`, perhaps with keyword arguments (which override a sensible default), would be useful. But that is only a hunch... and I'm not sure if it conflicts with any other assumptions in the design. And I certainly wouldn't want to complicate the API design if this is just a corner case / H2 quirk.

Perhaps just docstring changes that make clear the usage of executeQuery vs. execute vs. executeBatch etc. would be sufficient? I think that was what was fundamentally causing me some confusion.

Anyway I have a good workaround now. So feel free to close / re-purpose this issue as you wish!

Comment by Sean Corfield [ 28/Oct/14 6:48 PM ]

Given that Mike has a workaround, lowering priority to Minor. Probably best addressed by docstring updates at this point.

Comment by Sean Corfield [ 26/Aug/15 11:14 PM ]

Resolved by improving docstrings and adding official support for H2 (and testing against it!).

The specific solution for this script issue is to use (query ...) because that's what the underlying H2 driver wants!





[JDBC-112] avoid reflection on .prepareStatement Created: 15/Sep/15  Updated: 01/Jun/16  Resolved: 15/Sep/15

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Michael Blume Assignee: Michael Blume
Resolution: Completed Votes: 0
Labels: None

Attachments: Text File JDBC-112-v1.patch     Text File JDBC-112-v2.patch    
Patch: Code

 Description   

There's only one piece of reflection in java.jdbc, it's a little tricky to remove, but it can be done.
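
As a sketch of the general technique (not the actual patch): the reflective call arises because `.prepareStatement` has several overloads, and without type hints the Clojure compiler cannot pick one at compile time. Hinting the connection and using the concrete overload resolves it. All names below are illustrative.

```clojure
;; With *warn-on-reflection* enabled, an unhinted call to
;; .prepareStatement triggers a reflection warning; hinting the
;; Connection argument (and the return type) eliminates it.
(set! *warn-on-reflection* true)

(defn prepare ^java.sql.PreparedStatement
  [^java.sql.Connection con ^String sql]
  (.prepareStatement con sql
                     java.sql.ResultSet/TYPE_FORWARD_ONLY
                     java.sql.ResultSet/CONCUR_READ_ONLY))
```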



 Comments   
Comment by Sean Corfield [ 15/Sep/15 6:55 PM ]

Thank you! We're due for 0.4.2 "soon" so I'll add this in.

Comment by Sean Corfield [ 15/Sep/15 7:41 PM ]

Thank you!





[JDBC-113] Remove redundant type-hints Created: 15/Sep/15  Updated: 01/Jun/16  Resolved: 15/Sep/15

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Michael Blume Assignee: Michael Blume
Resolution: Completed Votes: 0
Labels: None

Attachments: Text File JDBC-113-v1.patch    
Patch: Code

 Description   

There are some type-hints on local instances of java.sql.Connection that aren't actually necessary because get-connection's return type is hinted – the compiler can already infer the types of the locals.
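
The inference being described works roughly like this (a minimal sketch; the function bodies are invented, not the java.jdbc source):

```clojure
;; A return-type hint on the producing fn lets the compiler infer the
;; type of any local bound to its result, so re-hinting the local is
;; redundant. (:connection-uri here is an assumed db-spec key.)
(defn get-connection ^java.sql.Connection [db-spec]
  (java.sql.DriverManager/getConnection (:connection-uri db-spec)))

(defn do-commands [db-spec ^String sql]
  (with-open [con (get-connection db-spec)]   ; con inferred as Connection
    (with-open [stmt (.createStatement con)]  ; no extra hint needed
      (.execute stmt sql))))
```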



 Comments   
Comment by Sean Corfield [ 15/Sep/15 6:53 PM ]

Thank you. I suspect that's historical as code has been reorganized. Patch looks nice and clean.

Comment by Sean Corfield [ 15/Sep/15 7:41 PM ]

Thank you!





[JDBC-114] ex-info thrown inside with-db-transaction is not preserved Created: 23/Nov/15  Updated: 01/Jun/16  Resolved: 23/Nov/15

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Christopher Jeris Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None
Environment:

Clojure 1.7.0, java.jdbc 0.3.7 (same issue exists in 0.4.2 but my active environment uses 0.3.7)



 Description   

I have a with-db-transaction block surrounding several queries, where in case one of them fails I would like to attach some locally known metadata about the individual query using ex-info. Unfortunately the clojure.java.jdbc/db-transaction* macro uses clojure.java.jdbc/throw-non-rte to strip off all the layers of RuntimeException around a thrown SQLException. Since clojure.lang.ExceptionInfo is a RuntimeException, this means that my metadata does not survive to the outside of the with-db-transaction macro, unless I defeat the intent of JVM exception handling by passing the original SQLException somewhere other than the cause field of my wrapper exception.

  • Is this RuntimeException stripping behavior mentioned in the public facing documentation for clojure.java.jdbc?
  • Is it feasible from a design point of view to suppress this behavior, either on a switch, or for ExceptionInfo only, or both?

(I am happy to supply a patch myself if you judge this an appropriate enhancement.)
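
The scenario can be sketched as follows (db-spec and the :query-id metadata are hypothetical; this illustrates the reported behavior in 0.3.7/0.4.2):

```clojure
;; Before the fix, the transaction machinery stripped RuntimeException
;; layers, so the ExceptionInfo (and its ex-data) never reached the
;; caller -- the inner SQLException was re-thrown instead.
(try
  (jdbc/with-db-transaction [tx db-spec]
    (try
      (jdbc/query tx ["select * from missing_table"])
      (catch java.sql.SQLException e
        ;; attach locally known metadata about this particular query
        (throw (ex-info "query failed" {:query-id 42} e)))))
  (catch clojure.lang.ExceptionInfo e
    ;; with the 0.5.0 fix this catch fires and the metadata survives
    (:query-id (ex-data e))))
```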



 Comments   
Comment by Sean Corfield [ 23/Nov/15 5:07 PM ]

The original reason for this was that the Clojure runtime auto-wrapped exceptions so your java.jdbc code might throw a SQLException but you couldn't catch it directly because it got turned into a RuntimeException instead. That was added a very long time ago. I'd have to see when Clojure stopped wrapping RTEs to see whether that could be removed – otherwise I'd have to special case ExceptionInfo in there instead.

Comment by Sean Corfield [ 23/Nov/15 5:13 PM ]

It relates to this http://dev.clojure.org/jira/browse/CLJ-855 so it looks like I could remove it if I dropped support for Clojure 1.3.0. I only dropped support for 1.2.x in September but that's definitely a possibility.

Comment by Sean Corfield [ 23/Nov/15 5:24 PM ]

I removed throw-non-rte and my tests do not fail for Clojure 1.3.0 which makes me think my tests are not comprehensive enough. I want to see if I can reproduce the wrapping for 1.3.0 in a test before I try to close this.

Comment by Sean Corfield [ 23/Nov/15 5:49 PM ]

Test added that reproduces the wrapped exception behavior and it occurs in 1.3.0 only so I'll drop support for that but bump the version to 0.5.0.

Comment by Sean Corfield [ 23/Nov/15 5:57 PM ]

Fixed in upcoming 0.5.0 (which drops Clojure 1.3.0 support).

Comment by Christopher Jeris [ 24/Nov/15 9:06 AM ]

That was fast! Thank you very much!

Comment by Sean Corfield [ 24/Nov/15 10:40 AM ]

Well, I don't know yet when 0.5.0 will be released, but you can try out the snapshot from Sonatype in the meantime if you want. Since I'm going from 0.4.x to 0.5.0, I may well make some other breaking changes (such as removing the deprecated namespace and maybe tackling the reducer stuff – which would also drop support for 1.4 I think).





[JDBC-103] Recent links in changelog/readme use https for dev.clojure Created: 29/Oct/14  Updated: 01/Jun/16  Resolved: 29/Oct/14

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Trivial
Reporter: John Walker Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None

Attachments: Text File fixlinks.patch    
Patch: Code

 Description   

Some of the recent links to jira issues in the changelog and readme point to https://dev.clojure.org, but there is no https support on http://dev.clojure.org. This replaces all https://dev.clojure links with http://dev.clojure



 Comments   
Comment by Sean Corfield [ 29/Oct/14 4:20 PM ]

Oops, not sure how those crept in. Thank you!

Comment by John Walker [ 29/Oct/14 4:23 PM ]

You're welcome!

Comment by Sean Corfield [ 29/Oct/14 4:26 PM ]

Fixed links per patch and updated CHANGES to reflect that fix!





[JDBC-54] db-transaction does not close connection? Created: 01/May/13  Updated: 01/Jun/16  Resolved: 07/May/13

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Critical
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

See https://gist.github.com/aphyr/5493964



 Comments   
Comment by Sean Corfield [ 03/May/13 12:39 PM ]

Having reviewed the code, the result of get-connection is never closed; only the result of calling .getConnection on a PreparedStatement is, and that matches how java.jdbc has always worked, so I don't believe this is a problem. Aphyr mentioned on Twitter that he believes the problem was just due to his test code and not inherent in java.jdbc but we are still waiting for confirmation. Lowering priority for now.

Comment by Sean Corfield [ 07/May/13 8:30 PM ]

I believe I have verified this locally. Under the old API, a connection was only added explicitly at the "top" of a call chain. Under the new API, a connection can be added on-demand in many places which makes the logic much more complex and there are a couple of places where normal usage could indeed cause a connection leak.

Comment by Sean Corfield [ 07/May/13 9:27 PM ]

I believe this is resolved in the latest snapshot (which will become Alpha 4 at some point), as I've gone thru and changed how get-connection is called and ensured that with-open is used whenever a new connection is actually created.

The code needs a fair bit of clean up now tho'. I'm not happy with the result of adding that logic so I need to spend some time refactoring...
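
The ownership rule described above can be sketched like this (run-query and get-connection are hypothetical helpers standing in for the real internals):

```clojure
;; Close the connection only when this call created it; if the caller
;; supplied an open connection in the db-spec, the caller owns it.
(defn query* [db-spec sql-params]
  (if-let [^java.sql.Connection con (:connection db-spec)]
    (run-query con sql-params)                 ; caller's connection: don't close
    (with-open [con (get-connection db-spec)]  ; we created it: we close it
      (run-query con sql-params))))
```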





[JDBC-20] Create new clean API that accepts connection etc (post 0.1.0) Created: 12/Oct/11  Updated: 01/Jun/16  Resolved: 26/Dec/12

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

Create streamlined API that relies on no bindings but instead accepts everything as an argument. Refactor existing API methods to operate in terms of this new API as appropriate.



 Comments   
Comment by Sean Corfield [ 12/Oct/11 4:28 PM ]

See JDBC-3 for some more background.

Comment by Sean Corfield [ 26/Dec/12 3:41 PM ]

The new API has been added. There is also an optional DSL to make it slightly easier to generate SQL that can be used with both the original and the new API.

I will open tickets for follow-up issues.





[JDBC-43] Rewrite old API (with *db*) in terms of new API Created: 26/Dec/12  Updated: 01/Jun/16  Resolved: 07/Apr/13

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Task Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

Now that the new API exists, the old API should be rewritten in terms of the new API, passing db and binding it appropriately. Where no new equivalent exists, extend the new API (in particular, the form of insert that just provides rows without column names is not supported in the new API).



 Comments   
Comment by Sean Corfield [ 30/Dec/12 7:51 PM ]

Most of this has been done now. It needs extensive testing!

Comment by Gerrard McNulty [ 18/Feb/13 4:05 PM ]

When are you planning on making a stable release? Are the new APIs going to change at all?

Comment by Sean Corfield [ 19/Feb/13 6:41 PM ]

There are still a number of open tickets that need completing before I can call this a stable release. Folks are welcome to test the SNAPSHOT release on Sonatype and provide feedback.

Comment by Sean Corfield [ 07/Apr/13 2:23 PM ]

0.3.0-alpha1 has been released so this is "done". What's left is polishing and bug fixing.





[JDBC-49] jdbc insert-records can not handle spaces in columnames Created: 21/Mar/13  Updated: 01/Jun/16  Resolved: 06/Apr/13

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Cees van Kemenade Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: bug
Environment:

Postgres on Debian


Attachments: File jdbcIssue2.clj    

 Description   

If a database contains column names that include spaces, sql/insert-record will fail. However, inserting via sql/do-commands or sql/insert-rows works as expected.

Attached is a file showing the error.



 Comments   
Comment by Cees van Kemenade [ 22/Mar/13 1:21 AM ]

Further analysis based on a suggestion from Sean leads to:

It seems there is an issue with the naming-strategy here, either in the way I define it or in the way it is implemented:

(sql/with-connection db
  (sql/with-naming-strategy {:keyword identity :entity (fn [x] (str \" x \"))}
    ...
    ))

When I remove the naming-strategy and replace the offending insert by a manual quoting strategy via
(sql/insert-records "test_jdbc_naming3" {((fn [x] (str \" x \")) "b- b") 7})
which corresponds to
(sql/insert-records "test_jdbc_naming3" {"\"b- b\"" 7})
then the code runs fine.

So the issue is located in the handling of the naming-strategy, or in the way I defined the naming-strategy (for postgres).

Comment by Sean Corfield [ 06/Apr/13 4:14 PM ]

Fixed in 0.3.0-SNAPSHOT by wiring as-str variable into compatibility layer as :entities argument.





[JDBC-65] Using execute! with no params to insert a row fails on SQLite Created: 09/Jul/13  Updated: 01/Jun/16  Resolved: 16/Dec/13

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

This is due to executeUpdate. Need to reverify the issues around using executeBatch with no params (since that looks like what db-do-commands does - which works - but executeUpdate was used as a workaround for an issue with no params at some point).

user> (def sqlite-db {:subprotocol "sqlite"
                :subname "clojure_test_sqlite"})
#'user/sqlite-db
user> (clojure.java.jdbc/db-do-commands sqlite-db (ddl/create-table :test [:col :int]))
(0)
user> (clojure.java.jdbc/query sqlite-db ["select * from test"])
()
user> (execute! sqlite-db ["insert into test values (?)" 42])
(1)
user> (clojure.java.jdbc/query sqlite-db ["select * from test"])
({:col 42})
user> (execute! sqlite-db ["insert into test values (123)"])
nil
user> (clojure.java.jdbc/query sqlite-db ["select * from test"])
({:col 42})
user> (clojure.java.jdbc/insert! sqlite-db :test {:col 123})
({:last_insert_rowid() 2})
user> (clojure.java.jdbc/query sqlite-db ["select * from test"])
({:col 42} {:col 123})



 Comments   
Comment by Sean Corfield [ 15/Sep/13 4:19 PM ]

Will deal with this during the beta of 0.3.0.

Comment by Sean Corfield [ 16/Dec/13 8:19 PM ]

This turned out to be a bug in how execute! handled empty param group sequences, but only SQLite was affected.





[JDBC-72] Improve laziness with as-arrays? Created: 03/Nov/13  Updated: 01/Jun/16  Resolved: 03/Nov/13

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 1
Labels: None


 Description   

Currently, query uses vec with as-arrays? so the result set is fully realized before result-set-fn is called. This isn't ideal.

A useful improvement would be to default result-set-fn to vec if as-arrays? else to doall (as currently) and remove the vec call in the conditional argument to result-set-fn.

This would not change the default behavior for as-arrays? true/false but would allow users to override result-set-fn to leverage lazy result set fetching even when using as-arrays? true.
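
Concretely, after the change a user can opt into lazy consumption like this (column and table names are illustrative; option style follows the 0.3.x keyword-argument API):

```clojure
;; With :as-arrays? true, overriding :result-set-fn avoids realizing
;; the whole result set with vec; the fn must consume the rows before
;; the connection closes, hence the doall on the slice it keeps.
(jdbc/query db-spec ["select id, name from users"]
            :as-arrays? true
            :result-set-fn (fn [rows]
                             ;; rows is [[:id :name] [1 "a"] ...]
                             ;; keep just the first 10 data rows
                             (doall (take 10 (rest rows)))))
```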



 Comments   
Comment by Sean Corfield [ 03/Nov/13 9:39 PM ]

Will be in alpha6/beta1





[JDBC-84] Some JDBC drivers return a Boolean that is constructed and not identical to Clojure's true/false Created: 16/Dec/13  Updated: 01/Jun/16  Resolved: 16/Dec/13

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

After some experimentation, I've discovered that certain JDBC drivers return the equivalent of (Boolean. true) and (Boolean. false) which are truthy/falsey but sometimes produce surprising results since they are not actually true and false. Extending the result set reading protocol to Boolean to turn them into canonical values via if seems to do the trick.



 Comments   
Comment by Sean Corfield [ 16/Dec/13 2:25 PM ]

A simple (if v true false) is not sufficient for this - it needs to be (if (= true v) true false) based on tests I've conducted. The problem is that the constructed (Boolean. false) value behaves as truthy.
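
A sketch of the canonicalization described above, using java.jdbc's result-set reading protocol (assuming the usual `[clojure.java.jdbc :as jdbc]` require):

```clojure
;; A constructed (Boolean. false) is truthy in Clojure, so
;; (if v true false) would yield true for it; comparing against the
;; canonical true value normalizes both cases.
(extend-protocol jdbc/IResultSetReadColumn
  Boolean
  (result-set-read-column [v _rsmeta _idx]
    (if (= true v) true false)))
```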

Comment by Sean Corfield [ 16/Dec/13 3:13 PM ]

Added Boolean implementation to IResultSetReadColumn.





[JDBC-107] metadata-result leaks ResultSets Created: 12/Mar/15  Updated: 01/Jun/16  Resolved: 26/Aug/15

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Niels van Klaveren Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

When working with Oracle DatabaseMetaData, I noticed out-of-cursors exceptions (ORA-01000) when iterating over tables and requesting their .getIndexInfo and .getImportedKeys information. These errors tend to point to leaked result sets. Investigating the metadata-result code I noticed this does happen; the result sets are just leaked silently on the other databases I tend to use.

A fix would include a default result-set-fn of doall in the :or keys clause, and would bind the true branch of (if (instance? java.sql.ResultSet rs-or-value) ...) to a new var, call .close on rs-or-value, and return the new var.



 Comments   
Comment by Sean Corfield [ 12/Mar/15 11:57 AM ]

Thanks Niels. I'll take a look at that shortly. I may have some time next week.

Comment by Sean Corfield [ 18/May/15 11:38 AM ]

I finally got around to looking at this – apologies for the delay – and I'm not convinced metadata-result should close the ResultSet since that is passed in by the user. java.jdbc itself only ever calls metadata-result in a test so I updated that with the way I think it should be handled, by the user:

(sql/with-db-metadata [metadata db-spec]
  (with-open [result (.getTables metadata ...)]
    (let [table-info (sql/metadata-result result)]
      ... do stuff with table-info ...)))

The point here is that the user creates the ResultSet, not java.jdbc, so the user should close it correctly.

Comment by Niels van Klaveren [ 30/Jul/15 8:47 AM ]

The similarities between metadata-result and query (both having row-fn and result-set-fn) made me suppose that abstracting away the connection handling of the result set was the intention. It seems that assumption was a bit too hasty.

However, metadata-result currently straddles uneasily between an implementation that just returns a result-set-seq or value, and one that does all the processing intended for abstracting away connection opening and closing, without actually doing so.

This does open the way for a quite simple workaround that does so, though: a simple wrapping macro

(defmacro metadata-query
  [query & params]
  `(with-open [result# ~query]
     (j/metadata-result result# ~@params)))

which takes a java metadata call and the result set processing parameters for metadata-result, and processes the results in a with-open block according to the parameters.

(j/with-db-metadata [meta db-spec]
                    (metadata-query (.getTables meta nil nil nil (into-array String ["TABLE"]))
                                    :result-set-fn (fn [rs] (->> rs
                                                                 (filter #(= (:user db-spec) (:table_schem %)))
                                                                 (map :table_name)
                                                                 doall))))

Comment by Sean Corfield [ 30/Jul/15 10:59 AM ]

A new actionable suggestion has been made.

Comment by Sean Corfield [ 30/Jul/15 11:00 AM ]

That seems like a reasonable addition which would clarify the usage. Thank you.

Comment by Sean Corfield [ 26/Aug/15 9:25 PM ]

Will be in 0.4.2.





[JDBC-118] Remove all deprecated functionality in 0.6.0 Created: 09/Apr/16  Updated: 01/Jun/16  Resolved: 13/Apr/16

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Task Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

See JDBC-117 for the bridging release that introduced the replacement syntax and deprecated the older syntax.

In addition, remove the deprecated namespace and tests, as well as db-transaction which was deprecated back in 0.3.0.



 Comments   
Comment by Sean Corfield [ 12/Apr/16 11:39 PM ]

This is all implemented on the 0.6.0 branch.

Comment by Sean Corfield [ 13/Apr/16 11:48 AM ]

Released in 0.6.0-alpha1





[JDBC-120] Clean up syntax for create table DDL to improve options handling Created: 09/Apr/16  Updated: 01/Jun/16  Resolved: 09/Apr/16

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

Version 0.5.5 introduced a single options map, flagged by the :options argument. This is a compromise while I figure out a better way to change create-table-ddl into a better-behaved function, instead of one that is inherently variadic.



 Comments   
Comment by Sean Corfield [ 09/Apr/16 5:03 PM ]

Since the arguments are really:

  • table name
  • list of column specs
  • options

It probably makes sense to have the signature:

(create-table-ddl :table [[col1 spec] [col2 spec] ..] {:option value})

so it takes either three or four arguments: a name (string / symbol / keyword), a collection of column specs (require the column specs to be vectors), an optional map of options.

Given that the current form is variadic but the "middle" arguments are not maps anyway, introducing :options in 0.5.5 was unnecessary, but consistent with the changes for insert!.

Version 0.5.6 should support both forms, then 0.6.0 can drop support for the general variadic version.

Comment by Sean Corfield [ 09/Apr/16 9:12 PM ]

This has been implemented ready for 0.5.6.





[JDBC-5] Adding stropping and qualified identifiers to java.jdbc Created: 24/Apr/11  Updated: 01/Jun/16  Resolved: 09/Jul/11

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Nicolas Buduroi Assignee: Nicolas Buduroi
Resolution: Completed Votes: 0
Labels: None

Attachments: Text File 0001-Added-the-with-stropping-macro-and-the-as-identifier.patch    
Patch: Code
Waiting On: Sean Corfield

 Description   

Following this discussion:

https://groups.google.com/d/topic/clojure/j7fXdkFMrec/discussion

I've implemented a simple stropping feature that also provides a way to supply qualified identifiers, and a function (as-identifier) that can be reused in user code. The implementation can be found here:

https://github.com/budu/java.jdbc/commit/0822d323cd4d267533a88241be68cb184ff9bf28

It's also attached to this issue.

Here are some examples of how the feature could be used:

user> (create-table :foo [:bar :integer] [:baz :text])
"CREATE TABLE foo (bar integer, baz text)"
user> (create-table [:schema :foo] [:bar :integer] [:baz :text])
"CREATE TABLE schema.foo (bar integer, baz text)"
user> (with-stropping \" identity (create-table [:schema :foo] [:bar :integer] [:baz :text]))
"CREATE TABLE \"schema\".\"foo\" (\"bar\" integer, \"baz\" text)"
user> (with-stropping [[ ]] identity (create-table [:schema :foo] [:bar :integer] [:baz :text]))
"CREATE TABLE [schema].[foo] ([bar] integer, [baz] text)"



 Comments   
Comment by Sean Corfield [ 24/Apr/11 8:21 PM ]

Duplicate of https://github.com/clojure/java.jdbc/issues/7 - thanx for the patch!

Comment by Sean Corfield [ 09/Jul/11 2:28 PM ]

with-naming-strategy, with-quoted-identifiers, as-quoted-str address this.





[JDBC-1] Provide option to return SQL generated / execution stats Created: 08/May/11  Updated: 01/Jun/16  Resolved: 26/Aug/15

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: None


 Description   

Shantanu: Provide a mechanism to show the SQL being executed (configurable, so that it can be turned off)

Sean: Good idea. Even better, a way to access statistics about the prepared statement after execution - timing etc?

Shantanu: Yes, that would be an add-on value, to show how the queries are performing.



 Comments   
Comment by Sean Corfield [ 15/Sep/13 4:21 PM ]

Post 0.3.0

Comment by Sean Corfield [ 16/Jan/15 1:17 PM ]

Sounds like not all drivers can support this but PreparedStatement.toString() returns the generated SQL for some databases: http://stackoverflow.com/questions/2683214/get-query-from-java-sql-preparedstatement

Comment by Sean Corfield [ 26/Aug/15 9:02 PM ]

Since there is no portable way to do this and the output of (.toString stmt) differs across drivers such that extracting the generated SQL / parameters in any meaningful way is difficult, I'm going to decline this one.





[JDBC-21] Accept string URL as connection parameters Created: 17/Oct/11  Updated: 01/Jun/16  Resolved: 02/Nov/11

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Phil Hagelberg Assignee: Phil Hagelberg
Resolution: Completed Votes: 0
Labels: None

Attachments: Text File 0001-Accept-string-or-URI-in-with-connection.patch     Text File jdbc-21-take2.patch    
Patch: Code and Test

 Description   

Should support connection parameters in this style:

(with-connection "postgres://user:password@host.com:port/db-name" [...])
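
Parsing such a URL into a db-spec map might look like the sketch below (names and the scheme mapping are illustrative; as noted in the comments, the URL scheme and JDBC subprotocol don't always match, e.g. "postgres" vs "postgresql"):

```clojure
(require '[clojure.string :as str])

;; Hypothetical parser: splits user:password out of the URI's
;; user-info and rebuilds host/port/path as the JDBC subname.
(defn uri->db-spec [url]
  (let [uri         (java.net.URI. url)
        [user pass] (some-> (.getUserInfo uri) (str/split #":"))
        schemes     {"postgres" "postgresql"}]
    {:subprotocol (get schemes (.getScheme uri) (.getScheme uri))
     :subname     (str "//" (.getHost uri)
                       (when (pos? (.getPort uri))   ; .getPort is -1 if absent
                         (str ":" (.getPort uri)))
                       (.getPath uri))
     :user        user
     :password    pass}))
```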



 Comments   
Comment by Phil Hagelberg [ 17/Oct/11 1:18 PM ]

Talked this over with Sean on IRC; I'll put together a patch.

Comment by Phil Hagelberg [ 17/Oct/11 4:21 PM ]

Implements said functionality. Patch depends upon supplied patch for JDBC-22.

Comment by Phil Hagelberg [ 17/Oct/11 4:54 PM ]

Hold off on this patch; apparently the scheme and subprotocol don't always match up. Will provide an updated patch.

Comment by Phil Hagelberg [ 17/Oct/11 6:00 PM ]

This one should take care of the subprotocol issue; it also adds support for running the tests against postgres.

Comment by Sean Corfield [ 02/Nov/11 2:52 PM ]

Patches applied - thank you!





[JDBC-24] Make where-params optional for update-values function Created: 04/Jan/12  Updated: 01/Jun/16  Resolved: 04/Jan/12

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Trent Ogren Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: None

Attachments: Text File optional-where-params-update-values.patch    
Patch: Code and Test

 Description   

This prevents having to pass a wonky always-true clause (e.g. ["1=1"]) as the where-params when updating all the rows in a table.



 Comments   
Comment by Trent Ogren [ 04/Jan/12 11:04 PM ]

I'm aware that I might have to sign the Clojure CA. I just thought I'd see what you think first...

Comment by Sean Corfield [ 04/Jan/12 11:43 PM ]

See JDBC-14 since I'm declining this for the same reason. Performing a global update is an unusual operation and I think the API should prevent you from accidentally updating all rows, just as it should prevent you from deleting all rows. Requiring a where clause does that.





[JDBC-33] update-or-insert does not work with oracle Created: 21/May/12  Updated: 01/Jun/16  Resolved: 10/Jun/12

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Minor
Reporter: Brian Tatnall Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: bug
Environment:

clojure 1.3.0
clojure.java.jdbc 0.2.1
oracle 10.2.0.4.0



 Description   

According to the oracle documentation

For a prepared statement batch, it is not possible to know the number of rows affected in the database by each individual statement in the batch. Therefore, all array elements have a value of -2. According to the JDBC 2.0 specification, a value of -2 indicates that the operation was successful but the number of rows affected is unknown.

As documented, update-values returns (-2), which means the (zero? ...) check in update-or-insert-values will never be true.

Oracle does provide the number of rows updated via the getUpdateCount method on instances of Statement.

The following change fixes this problem for oracle users, but I am not sure how it will affect other drivers.

(defn oracle-do-prepared
  "Executes an (optionally parameterized) SQL prepared statement on the                                           
  open database connection. Each param-group is a seq of values for all of                                        
  the parameters.                                                                                                 
  Return a seq of update counts (one count for each param-group)."
  [sql & param-groups]
  (with-open [^PreparedStatement stmt (prepare-statement (connection) sql)]
    (if (empty? param-groups)
      (transaction* (fn [] (vector (.executeUpdate stmt))))
      (do
        (doseq [param-group param-groups]
          (set-parameters stmt param-group)
          (.addBatch stmt))
        (transaction* (fn [] (let [result (.executeBatch stmt)
                                  count  (.getUpdateCount stmt)]
                              (vector count))))))))


 Comments   
Comment by Sean Corfield [ 22/May/12 12:17 AM ]

Interesting. (.getUpdateCount stmt) is a standard part of the JDBC API, so it looks reasonable: if the result of (.executeBatch stmt) is a single-element sequence containing -2, the code could call (.getUpdateCount stmt) and return that (single) value as a sequence. That should be completely portable, but it will be hard to test since none of the other DBs return -2 as far as I know, and I don't have an Oracle install to test against.
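The portable fallback described here could be sketched as follows (illustrative only, not the committed change; -2 is java.sql.Statement/SUCCESS_NO_INFO):

```clojure
;; Sketch of the portable fallback: if executeBatch reports a single -2
;; (SUCCESS_NO_INFO), ask the statement for its update count instead.
(defn batch-update-counts [^java.sql.PreparedStatement stmt]
  (let [counts (vec (.executeBatch stmt))]
    (if (= counts [-2])
      [(.getUpdateCount stmt)]
      counts)))
```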





[JDBC-56] Problem with find-connection when using agents with c3p0 connection pool Created: 16/May/13  Updated: 01/Jun/16  Resolved: 15/Sep/13

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Minor
Reporter: Ed O'Loughlin Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: None
Environment:

OS X 10.8.3
Java 1.6.0_45
Clojure 1.5
c3p0 0.9.1.2
java.jdbc 0.2.1 (the problem also seems to be there in HEAD at the time of logging this)



 Description   

I use an agent to offload email processing and update a db on completion. If I use a c3p0 connection pool, I get an exception and a complaint that the connection isn't open (see stacktrace). The code works if I wrap the agent fn in a clojure.java.jdbc/with-connection and get a connection from the pool in the agent thread.

In jdbc.clj, find-connection (db-find-connection in HEAD) checks for an existing connection and tries to use it without checking whether it's open. This causes an exception with a c3p0 connection that was acquired in another thread.

Here's the stacktrace I'm seeing (note the "You can't operate on a closed Connection!!!"):

2013-05-16 10:32:22,208 [clojure-agent-send-off-pool-7] DEBUG com.mchange.v2.sql.SqlUtils - Converting Throwable to SQLException...
java.lang.NullPointerException
	at com.mchange.v2.c3p0.impl.NewProxyConnection.prepareStatement(NewProxyConnection.java:186)
	at clojure.java.jdbc$prepare_statement.doInvoke(jdbc.clj:450)
	at clojure.lang.RestFn.invoke(RestFn.java:425)
	at clojure.lang.AFn.applyToHelper(AFn.java:163)
	at clojure.lang.RestFn.applyTo(RestFn.java:132)
	at clojure.core$apply.invoke(core.clj:621)
	at clojure.java.jdbc$with_query_results_STAR_.invoke(jdbc.clj:646)
	at clj_record.core$find_by_sql$func__3362__auto____3378.invoke(core.clj:72)
	at clj_record.core$find_by_sql.invoke(core.clj:71)
	at clj_record.core$find_records.invoke(core.clj:85)
	at clj_record.core$find_record.invoke(core.clj:91)
	at neataudio.domain.interview_participant_relationship$find_record.invoke(interview_participant_relationship.clj:18)
	at neataudio.domain.interview_participant_relationship$update_email_status.invoke(interview_participant_relationship.clj:37)
	at neataudio.controllers.participant_controller$notify$fn__5113$fn__5114$func__3362__auto____5115$fn__5116.invoke(participant_controller.clj:104)
	at clojure.java.jdbc$transaction_STAR_$fn__3162.invoke(jdbc.clj:372)
	at clojure.java.jdbc$transaction_STAR_.invoke(jdbc.clj:371)
	at neataudio.controllers.participant_controller$notify$fn__5113$fn__5114$func__3362__auto____5115.invoke(participant_controller.clj:101)
	at neataudio.controllers.participant_controller$notify$fn__5113$fn__5114.invoke(participant_controller.clj:101)
	at neataudio.controllers.participant_controller$notify$fn__5113.invoke(participant_controller.clj:101)
	at clojure.lang.AFn.applyToHelper(AFn.java:185)
	at clojure.lang.AFn.applyTo(AFn.java:151)
	at clojure.core$apply.invoke(core.clj:623)
	at clojure.core$binding_conveyor_fn$fn__4115.doInvoke(core.clj:1848)
	at clojure.lang.RestFn.applyTo(RestFn.java:146)
	at clojure.lang.Agent$Action.doRun(Agent.java:114)
	at clojure.lang.Agent$Action.run(Agent.java:163)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
	at java.lang.Thread.run(Thread.java:680)
java.sql.SQLException: You can't operate on a closed Connection!!!
	at com.mchange.v2.sql.SqlUtils.toSQLException(SqlUtils.java:106)
	at com.mchange.v2.sql.SqlUtils.toSQLException(SqlUtils.java:65)
	at com.mchange.v2.c3p0.impl.NewProxyConnection.prepareStatement(NewProxyConnection.java:222)
	at clojure.java.jdbc$prepare_statement.doInvoke(jdbc.clj:450)
	at clojure.lang.RestFn.invoke(RestFn.java:425)
	at clojure.lang.AFn.applyToHelper(AFn.java:163)
	at clojure.lang.RestFn.applyTo(RestFn.java:132)
	at clojure.core$apply.invoke(core.clj:621)
	at clojure.java.jdbc$with_query_results_STAR_.invoke(jdbc.clj:646)
	at clj_record.core$find_by_sql$func__3362__auto____3378.invoke(core.clj:72)
	at clj_record.core$find_by_sql.invoke(core.clj:71)
	at clj_record.core$find_records.invoke(core.clj:85)
	at clj_record.core$find_record.invoke(core.clj:91)
	at neataudio.domain.interview_participant_relationship$find_record.invoke(interview_participant_relationship.clj:18)
	at neataudio.domain.interview_participant_relationship$update_email_status.invoke(interview_participant_relationship.clj:37)
	at neataudio.controllers.participant_controller$notify$fn__5113$fn__5114$func__3362__auto____5115$fn__5116.invoke(participant_controller.clj:104)
	at clojure.java.jdbc$transaction_STAR_$fn__3162.invoke(jdbc.clj:372)
	at clojure.java.jdbc$transaction_STAR_.invoke(jdbc.clj:371)
	at neataudio.controllers.participant_controller$notify$fn__5113$fn__5114$func__3362__auto____5115.invoke(participant_controller.clj:101)
	at neataudio.controllers.participant_controller$notify$fn__5113$fn__5114.invoke(participant_controller.clj:101)
	at neataudio.controllers.participant_controller$notify$fn__5113.invoke(participant_controller.clj:101)
	at clojure.lang.AFn.applyToHelper(AFn.java:185)
	at clojure.lang.AFn.applyTo(AFn.java:151)
	at clojure.core$apply.invoke(core.clj:623)
	at clojure.core$binding_conveyor_fn$fn__4115.doInvoke(core.clj:1848)
	at clojure.lang.RestFn.applyTo(RestFn.java:146)
	at clojure.lang.Agent$Action.doRun(Agent.java:114)
	at clojure.lang.Agent$Action.run(Agent.java:163)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
	at java.lang.Thread.run(Thread.java:680)
Caused by: java.lang.NullPointerException
	at com.mchange.v2.c3p0.impl.NewProxyConnection.prepareStatement(NewProxyConnection.java:186)
	... 28 more


 Comments   
Comment by Ed O'Loughlin [ 16/May/13 4:50 AM ]

Apologies: this is a crappy bug report. I'm under a bit of pressure at the moment so I can't make a test case.

Comment by Sean Corfield [ 21/May/13 12:36 PM ]

If you use the new API, you pass the db-spec into the functions and it will take care of getting the connection, using it, and closing it - and that will all happen on the same (agent) thread. I would not expect you to be able to get a connection in one thread and use it from another thread.

When you get a chance, please post an example of both the working and non-working code, but my initial thinking is this isn't a bug, just an inherent limitation of working with connections across threads....?
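The recommended pattern, passing the db-spec into the agent function so the connection is acquired and released on the agent's own thread, could be sketched like this (names such as pooled-db-spec and email-agent are illustrative):

```clojure
;; Pass the db-spec (here backed by a c3p0 pool) into the agent fn and
;; let java.jdbc check out, use, and close a connection on the agent's
;; thread, instead of sharing a Connection object across threads.
(send-off email-agent
          (fn [state]
            (jdbc/update! pooled-db-spec :participants
                          {:email_status "sent"}
                          ["id = ?" (:id state)])
            state))
```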

Comment by Sean Corfield [ 15/Jun/13 9:04 PM ]

Ed, are you going to get a chance to post an example of working / non-working code per my request a month ago?

I'll leave this open until the end of June but without further information, I'm going to assert it works if used as designed and close this as not a bug.

Comment by Sean Corfield [ 15/Sep/13 4:17 PM ]

In the absence of better information, I'm closing this "works as designed" and caution that people need to be more careful when using connections across multiple threads.





[JDBC-123] resolve insert-milti! typo in jdbc.clj Created: 16/Apr/16  Updated: 01/Jun/16  Resolved: 16/Apr/16

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Trivial
Reporter: Stuart Hinson Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None

Attachments: Text File JDBC-123.patch    
Patch: Code

 Description   

appears on https://github.com/clojure/java.jdbc/blob/cae71ba9c145f205564c589b538607fc55cc20ae/src/main/clojure/clojure/java/jdbc.clj#L1042



 Comments   
Comment by Sean Corfield [ 16/Apr/16 6:31 PM ]

Good catch, thank you!

Comment by Sean Corfield [ 16/Apr/16 6:35 PM ]

Will be in next prerelease build (maybe beta1).





[JDBC-2] create-table should allow table-spec Created: 08/May/11  Updated: 01/Jun/16  Resolved: 09/Jul/11

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

create-table takes a name and a seq of column-specs but does not allow any table-spec to follow that, so you cannot, for example, specify the engine type / charset / etc.



 Comments   
Comment by Sean Corfield [ 09/Jul/11 2:18 PM ]

create-table now allows optional :table-spec "table level attributes" at the end of the specs.
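For example (a sketch against the API of that era; the table-spec string itself is database-specific):

```clojure
;; :table-spec appends raw, database-specific DDL after the
;; column definitions (here: MySQL engine and charset).
(sql/create-table :fruit
                  [:name "varchar(32)"]
                  [:cost :int]
                  :table-spec "ENGINE=InnoDB DEFAULT CHARSET=utf8")
```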





[JDBC-12] exceptions when trying to insert into apache derby Created: 24/Jul/11  Updated: 01/Jun/16  Resolved: 25/Jul/11

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Kevin Downey Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None

Waiting On: Sean Corfield

 Description   

java.sql.SQLException: Only executeBatch and clearBatch allowed in the middle of a batch.
at org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedConnection.newSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.ConnectionChild.newSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedStatement.checkIfInMiddleOfBatch(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedPreparedStatement.executeStatement(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedPreparedStatement.executeUpdate(Unknown Source)
at clojure.java.jdbc.internal$do_prepared_return_keys_STAR_$fn__84.invoke(internal.clj:272)
at clojure.java.jdbc.internal$transaction_STAR_.invoke(internal.clj:185)
at clojure.java.jdbc.internal$do_prepared_return_keys_STAR_.doInvoke(internal.clj:272)
at clojure.lang.RestFn.applyTo(RestFn.java:140)
at clojure.core$apply.invoke(core.clj:542)
at clojure.java.jdbc$insert_values.doInvoke(jdbc.clj:259)
at clojure.lang.RestFn.invoke(RestFn.java:443)
at hiredman.triples$store_triple$fn_471$fn_472.invoke(triples.clj:39)
at clojure.java.jdbc.internal$transaction_STAR_.invoke(internal.clj:174)
at hiredman.triples$store_triple$fn__471.invoke(triples.clj:39)
at clojure.java.jdbc.internal$with_connection_STAR_.invoke(internal.clj:156)
at hiredman.triples$store_triple.invoke(triples.clj:39)
at hiredman.clojurebot.factoids$fn__2689.invoke(factoids.clj:139)
at clojure.lang.MultiFn.invoke(MultiFn.java:167)
at hiredman.clojurebot.factoids$factoid_command_run.invoke(factoids.clj:173)
at conduit.core$fn_284$a_arr285$a_arr_286.invoke(core.clj:233)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$a_except$a_except__307.invoke(core.clj:350)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:42)
at clojurebot.conduit$a_indirect$this_fn__500.invoke(conduit.clj:9)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.irc$reply_fn$irc_reply_fn__350.invoke(irc.clj:22)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$select_fn.invoke(core.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.irc$irc_run$handle_msg__421.invoke(irc.clj:166)
at clojure.lang.AFn.applyToHelper(AFn.java:165)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:542)
at clojure.core$partial$fn__3678.doInvoke(core.clj:2009)
at clojure.lang.RestFn.invoke(RestFn.java:409)
at conduit.core$comp_fn$a_comp__193.invoke(core.clj:45)
at conduit.core$a_run.invoke(core.clj:28)
at conduit.irc$irc_run$run__424.invoke(irc.clj:175)
at conduit.irc$irc_run.doInvoke(irc.clj:179)
at clojure.lang.RestFn.applyTo(RestFn.java:140)
at clojure.core$apply.invoke(core.clj:542)
at clojurebot.core$main$fn_4654.invoke(core.clj:214)
at clojure.lang.AFn.call(AFn.java:18)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
Caused by: java.sql.SQLException: Only executeBatch and clearBatch allowed in the middle of a batch.
at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown Source)
... 185 more

Replacing .executeUpdate with .executeBatch in do-prepared-return-keys* seems to fix the problem, and to be what was meant in the first place: the result is bound to 'counts', and .executeUpdate returns a single count while .executeBatch returns multiple.



 Comments   
Comment by Sean Corfield [ 24/Jul/11 3:38 PM ]

Unfortunately .executeBatch had to be changed to .executeUpdate in order to solve JDBC-10:

https://github.com/clojure/java.jdbc/commit/9240d66c9124454187416861a0e2021952eb4f40

MS SQL Server and PostgreSQL both fail on .executeUpdate in that call position.

Clearly we need as many of these DBs up and running for automated testing as possible before I can attempt to fix this so that I don't accidentally break something else.

Comment by Sean Corfield [ 24/Jul/11 5:26 PM ]

I now have some unit tests for MySQL, HSQLDB and Derby. insert-records fails with the exception reported here, but only for Derby. I'll complete the suite of tests, try to get PostgreSQL and MS SQL Server tests running as well, and then I'll be in a good position to fix things...

Comment by Sean Corfield [ 25/Jul/11 4:52 PM ]

The culprit was .addBatch which has now been removed and do-prepared-return-keys now explicitly only has one param-group argument.





[JDBC-31] distinct? throws clojure.lang.ArityException, when applied with no arguments Created: 12/May/12  Updated: 01/Jun/16  Resolved: 10/Jun/12

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Jürgen Hötzel Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: patch, performance

Attachments: Text File 0001-add-make-cols-unique-test-case-empty-cols.patch     Text File 0002-distinct-throws-clojure.lang.ArityException-when-app.patch    
Patch: Code and Test

 Description   

HSQLDB returns an empty ResultSet from (.getGeneratedKeys stmt) when no keys are generated, so this exception is thrown for each record without generated keys.

While the exception is caught in do-prepared-return-keys, it can add significant overhead from JVM exception handling. I did a performance test.

Before Patch:

clojure.java.test-jdbc> (time (sql/with-connection hsqldb-db (count (apply sql/insert-records :dummy  (map #(hash-map :name (str %) :id %) (range 10000))))))
"Elapsed time: 3429.346743 msecs"
10000

After Patch:

 clojure.java.test-jdbc> (time (sql/with-connection hsqldb-db (count (apply sql/insert-records :dummy  (map #(hash-map :name (str %) :id %) (range 10000))))))
"Elapsed time: 1397.444753 msecs"


 Comments   
Comment by Sean Corfield [ 10/Jun/12 5:24 PM ]

Thanx for the patch!





[JDBC-55] More reflection warnings to investigate Created: 14/May/13  Updated: 01/Jun/16  Resolved: 21/May/13

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Task Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None

Attachments: Text File jdbc-55-fix-reflection-warnings-v1.txt    

 Description   

Reflection warning, clojure/java/jdbc.clj:618:50 - reference to field executeQuery can't be resolved.
Reflection warning, clojure/java/jdbc.clj:618:33 - reference to field close can't be resolved.



 Comments   
Comment by Andy Fingerhut [ 15/May/13 10:34 AM ]

Patch jdbc-55-fix-reflection-warnings-v1.txt dated May 15 2013 adds one more type hint that eliminates the two mentioned instances of reflection.

Comment by Sean Corfield [ 21/May/13 10:08 AM ]

Patch applied. Thank you!





[JDBC-96] execute! should accept a PreparedStatement Created: 13/May/14  Updated: 01/Jun/16  Resolved: 18/May/14

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Tejas Dinkar Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: enhancement, patch

Attachments: Text File 0001-execute-can-now-accept-prepared-statements-as-well.patch    
Patch: Code and Test

 Description   

This patch to clojure.java.jdbc allows you to pass in a prepared statement to the execute function.

I have already signed the CLA (Tejas Dinkar).

I can also open a pull request from here: https://github.com/gja/java.jdbc/tree/execute-should-accept-a-prepared-statement



 Comments   
Comment by Sean Corfield [ 14/May/14 12:26 AM ]

This looks very nice Tejas, thank you! I'll apply the patch at the weekend and verify everything and let you know if I hit any issues.

Comment by Sean Corfield [ 18/May/14 7:11 PM ]

Patch applied. Will be in 0.3.4 version of library.





[JDBC-97] Support for setting options such as setTimeout on the PreparedStatement Created: 11/Jun/14  Updated: 01/Jun/16  Resolved: 11/Jun/14

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: David Powell Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: None


 Description   

It is currently rather difficult to set options such as the stmt.setTimeout() as the creation of the PreparedStatement is encapsulated inside db-query-with-resultset. Some sort of option or hook for this would be cool.



 Comments   
Comment by Sean Corfield [ 11/Jun/14 12:11 PM ]

You can pass a PreparedStatement instead of a SQL string - so you can build your PreparedStatement - using prepare-statement or manually - and then pass it into query as the first element of the vector that would normally contain the SQL string plus parameters.
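A sketch of that approach (db-spec is illustrative; setQueryTimeout is the underlying java.sql.Statement method):

```clojure
;; Build the PreparedStatement yourself, tune it, then pass it in place
;; of the SQL string as the first element of the query vector.
(with-open [con (jdbc/get-connection db-spec)]
  (let [stmt (jdbc/prepare-statement con "SELECT * FROM fruit WHERE cost < ?")]
    (.setQueryTimeout stmt 5)              ; seconds
    (jdbc/query {:connection con} [stmt 50])))
```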

Comment by Sean Corfield [ 11/Jun/14 12:11 PM ]

Already possible with existing API.

Comment by David Powell [ 11/Jun/14 12:35 PM ]

Ah cool - I was trying to do this with execute!, but I see that that won't be possible until release 0.3.4.





[JDBC-100] Allow setting the timeout on a prepared statement Created: 04/Sep/14  Updated: 01/Jun/16  Resolved: 28/Oct/14

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Gary Fredericks Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

This could be done via a new :timeout option to prepare-statement, or a generic option somewhere that allows manipulating the prepared statement prior to making a query.



 Comments   
Comment by Sean Corfield [ 28/Oct/14 5:43 PM ]

Given that both query and execute! support passing a PreparedStatement in place of the SQL string, this is already possible. You can create a PreparedStatement with the prepare-statement function and then set a timeout or whatever you want, then pass it into query or execute!.

That said, adding support for it directly on prepare-statement seems like a reasonable enhancement since that's where all the other fancy PreparedStatement stuff is done.

Comment by Sean Corfield [ 28/Oct/14 5:59 PM ]

Added :timeout to prepare-statement. Will be in 0.3.6.





[JDBC-121] (insert! db table [cols] [vals] {opts}) does not function correctly Created: 10/Apr/16  Updated: 01/Jun/16  Resolved: 10/Apr/16

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

The [vals] argument needs an extra set of [ ] around it to function correctly.
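The intended call shape, once fixed, looks roughly like this (a sketch; db, the table, and the option are illustrative):

```clojure
;; Single-row, column-based form with an options map -- the shape this
;; ticket fixed so the row vector no longer needs extra [ ] wrapping:
(jdbc/insert! db :fruit
              [:name :cost]   ; cols
              ["Apple" 59]    ; vals for one row
              {:entities clojure.string/lower-case})
```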



 Comments   
Comment by Sean Corfield [ 10/Apr/16 2:02 PM ]

Fixed in 0.5.7 and 0.6.0.





[JDBC-17] Type-hint public fns returning Connection Created: 10/Oct/11  Updated: 01/Jun/16  Resolved: 10/Oct/11

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Stuart Sierra Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None

Attachments: Text File JDBC-17-a.patch    
Patch: Code

 Description   

clojure.java.jdbc/connection and find-connection both return a java.sql.Connection but are not type-hinted.

There is a type hint on clojure.java.jdbc.internal/connection*, but it does not carry through to the public Var.

Tagging these functions eliminates 9 reflection warnings in a project I have.



 Comments   
Comment by Sean Corfield [ 10/Oct/11 11:21 PM ]

Patch applied. Thanx Stuart!





[JDBC-18] Consider ["name = ? and age = ?" nil 42] Created: 11/Oct/11  Updated: 01/Jun/16  Resolved: 12/Aug/12

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Duplicate Votes: 0
Labels: None


 Description   

The query is "name = nil and age = 42" where the intent is probably "name is null and age = 42".

It's not clear whether this is really the library's responsibility (to rewrite the SQL) or whether this should be handled by another layer above so this ticket is a placeholder for discussion.



 Comments   
Comment by Julian Birch [ 13/Oct/11 1:43 PM ]

I've been thinking about this a bit more, and I'm thinking that a function "match-clause" might deal with this in a fairly composable way. So

(match-clause { :name nil :age 42 })

would return

["Name is null and Age = ?" 42]

It's not exactly ClojureQL, but it's non-invasive and makes query-by-example easy to achieve.
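Julian's proposal could be sketched as follows (hypothetical: match-clause is not part of the library; the where clause added in JDBC-20 eventually played this role):

```clojure
(require '[clojure.string :as str])

;; Sketch of the proposed match-clause: nil values become "is null",
;; everything else a parameterized "= ?".
(defn match-clause [m]
  (let [clauses (for [[k v] m]
                  (if (nil? v)
                    [(str (name k) " is null")]
                    [(str (name k) " = ?") v]))]
    (into [(str/join " and " (map first clauses))]
          (mapcat rest clauses))))

;; (match-clause {:name nil :age 42})
;; => ["name is null and age = ?" 42]
```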

Comment by Sean Corfield [ 13/Oct/11 2:28 PM ]

That's nice idea. I actually do something similar in the CRUD wrapper I built around java.jdbc at World Singles so that might be a nice addition to the library.

Comment by Sean Corfield [ 12/Aug/12 9:39 PM ]

This will be incorporated into the new API I'm adding in JDBC-20 so you can say (where {:name nil :age 42}) and it will return the correct SQL where clause. Accordingly, I'm closing this as a duplicate.

Comment by Sean Corfield [ 12/Aug/12 9:39 PM ]

A subset of what will be implemented as part of JDBC-20





[JDBC-88] Update :arglists metadata so it no longer confuses Eastwood Created: 13/Jan/14  Updated: 01/Jun/16  Resolved: 26/Aug/15

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Task Priority: Minor
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: None


 Description   

:arglists metadata should be valid Clojure syntax. Examples should be moved into docstrings.



 Comments   
Comment by Andy Fingerhut [ 13/Jan/14 4:02 PM ]

Eastwood's use of :arglists for :wrong-arity linter checking is definitely 'at odds' with the way java.jdbc and some other libraries use it for clearer documentation.

I would recommend leaving :arglists as-is for java.jdbc, and waiting for the next version of Eastwood to add support for something like an :eastwood-arglists key for functions that explicitly replace their :arglists like this. Such a capability might best belong in Eastwood itself, since it is the new tool on the block.

As an example, I was considering that if a fn/macro explicitly chose to be Eastwood-friendly in this way, it could have an :arglists and :eastwood-arglists explicitly, for those few vars that replace their :arglists. When one is using Eastwood with a library that they do not want to modify in that way, there could be some extra config file for Eastwood where one could specify these :eastwood-arglists.

Sound reasonable?

Comment by Sean Corfield [ 28/Oct/14 6:47 PM ]

Given Andy's comment back in January, lowering priority to Minor.

Comment by Sean Corfield [ 16/Jan/15 1:14 PM ]

Eastwood has recently added the ability to provide configuration to override specific function :arglists and out of the box comes with java.jdbc config to override the problematic functions: https://github.com/jonase/eastwood/blob/master/resource/eastwood/config/clojure-contrib.clj#L38-64

That means, as far as Eastwood is concerned, this is a non-issue, but it would be nice to fix those functions' metadata anyway.

Comment by Sean Corfield [ 26/Aug/15 10:31 PM ]

Given the approach taken by Eastwood and the complexity of some of java.jdbc's argument lists, I have decided to leave the metadata as-is since it helps with CIDER and other tools enough to warrant the not-quite-valid structure.





[JDBC-8] Review (and fix) reflection warnings Created: 08/Jun/11  Updated: 01/Jun/16  Resolved: 17/Jul/11

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Task Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

There are quite a few reflection warnings in the c.j.j code which should be reviewed and performance critical pieces type hinted.



 Comments   
Comment by Allen Rohner [ 09/Jul/11 3:50 PM ]

Empirically, I can tell you the one in do-prepared* (the (.prepareStatement (connection) ...) call) and the one in transaction* (the (let [con (connection*) ...]) binding) are both performance critical for me.

Comment by Sean Corfield [ 09/Jul/11 4:00 PM ]

Thanx Allen. The impression I get from clojure-dev is new contrib libraries should have zero reflection warnings so I plan to find and fix all such cases in c.j.j.

Comment by Sean Corfield [ 17/Jul/11 5:47 PM ]

All reflection warnings have been removed.





[JDBC-15] resultset-seq uses structmap which has been deprecated Created: 13/Aug/11  Updated: 01/Jun/16  Resolved: 16/Oct/11

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Task Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

structmap has been deprecated. The open question is whether to replace its use with a record or a regular map (I'd expect the latter would be less surprising?).



 Comments   
Comment by Sean Corfield [ 12/Oct/11 4:29 PM ]

See JDBC-3 for some background on this.

Comment by Sean Corfield [ 16/Oct/11 3:41 PM ]

The way resultset-seq uses create-struct and struct doesn't seem to leverage any inherent functionality of structmaps except perhaps some vague guarantee of ordering of columns in rows. Replacing the create-struct / struct calls with

(into {} (map vector keys (row-values)))

maintains the order up to 16 columns (PersistentArrayMap) so that seems a reasonable compromise.

I'm not sure that preserving order is something that should be documented or supported?

Comment by Sean Corfield [ 16/Oct/11 4:32 PM ]

No longer uses structmap. Attempts to provide column order (up to 16 columns) but don't rely on that... it was never documented as a guarantee!





[JDBC-16] do-prepared doesn't throw on invalid SQL Created: 07/Sep/11  Updated: 01/Jun/16  Resolved: 11/Oct/11

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Allen Rohner Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   
(jdbc/do-prepared "syntactically invalid SQL")

returns nil, while

(jdbc/do-commands "syntactically invalid SQL")
throws as expected.



 Comments   
Comment by Allen Rohner [ 07/Sep/11 4:00 PM ]

Actually, do-prepared doesn't seem to do anything at all. (jdbc/do-commands "insert into foo (foo,bar) VALUES (1,2)") works as expected, same command with do-prepared doesn't do anything.

Comment by Sean Corfield [ 11/Oct/11 2:12 PM ]

Ensure .addBatch is called even when there are no param-groups.





[JDBC-23] Prepared statements' parameters don't get applied Created: 21/Nov/11  Updated: 01/Jun/16  Resolved: 22/Nov/11

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Ghadi Shayban Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None
Environment:

Oracle JDBC, probably applies universally


Attachments: Text File param.patch    
Patch: Code and Test

 Description   

Basically a goof line in the code.
If you pass a PreparedStatement to with-query-results, it chops off too many items from the param list, whereas if you pass in straight SQL, it's ok.

Deleted the offending line



 Comments   
Comment by Sean Corfield [ 22/Nov/11 1:01 AM ]

Thanx Ghadi.





[JDBC-40] insert nulls using .setNull (e.g. for Teradata) Created: 12/Oct/12  Updated: 01/Jun/16  Resolved: 03/Nov/13

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Felix Andrews Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None
Environment:

Teradata 14


Attachments: Text File JDBC-40.patch    
Patch: Code

 Description   

Hi, I am using this nice package with a large (government) Teradata machine. I found that I cannot insert nil values using insert-rows: the Teradata JDBC driver tells me to use .setNull or the form of .setObject including an SQL type code.

The following replacement definition of set-parameters was the only change needed to get this to work.

(defn- set-parameters
  "Add the parameters to the given statement."
  [^PreparedStatement stmt params]
  (let [pmd (.getParameterMetaData stmt)]
    (dorun
     (map-indexed
      (fn [ix value]
        (let [typ (.getParameterType pmd (inc ix))]
          (if (nil? value)
            (.setNull stmt (inc ix) typ)
            (.setObject stmt (inc ix) value typ))))
      params))))

I am not sure whether the use of .getParameterMetaData is generally appropriate, because the docs http://docs.oracle.com/javase/7/docs/api/java/sql/ParameterMetaData.html say:

For some queries and driver implementations, the data that would be returned by a ParameterMetaData object may not be available until the PreparedStatement has been executed.

Anyway it does work for Teradata. Maybe it could be an option?

It just occurred to me that it is probably inefficient to retrieve the parameter metadata on every call to set-parameters. It could be done in insert-rows instead. On the other hand, I didn't actually notice any performance hit when inserting 40,000 rows.



 Comments   
Comment by Sean Corfield [ 07/Apr/13 1:07 AM ]

This was a tricky one. Both .getParameterMetaData and .getParameterType can throw exceptions. On MS SQL Server, if there is a syntax error in the SQL, the former throws an Exception. On MySQL, the latter throws an exception (every time as far as I can tell, even tho' .getParameterMetaData succeeds). That introduces quite a performance penalty for MySQL (about 10% in my tests).

However, if I treated MySQL as a special case and swallowed all exceptions and had a fallback to the original .setObject call, I could get everything to pass the test suite without a noticeable performance overhead (still measuring the latest MySQL performance as I write this).

Although it will probably slow everyone down a little, having to make these calls, it should be more robust and may make the .setObject calls faster in some situations?

Comment by Sean Corfield [ 07/Apr/13 1:24 AM ]

Fixed in 0.3.0-SNAPSHOT. May need to be revisited for performance tweaks later. Uncomfortable with MySQL being a special case here...

Comment by Sean Corfield [ 17/Apr/13 6:48 PM ]

This fix breaks PostgreSQL and already has MySQL as an exception, so I'm going to roll it back and we'll have to find another way to support Teradata, I'm afraid!

Comment by Paudi Moriarty [ 22/Apr/13 4:26 AM ]

PostgreSQL also complains about use of setObject with a null object and no parameter type when using protocolVersion 3.

Seems to work ok with protocolVersion 2 though.

Comment by Felix Andrews [ 25/Jun/13 9:15 PM ]

How about some kind of option to activate the .setNull behaviour?
e.g. (binding [*jdbc-set-null* true] ...)?

Comment by Gary Fredericks [ 02/Aug/13 10:00 AM ]

Attached a patch that adds an ISQLParameter protocol that handles the .setObject call but can be overridden for custom types.

The most awkward part of this is that I had to implement the protocol for Object and nil (in order to cover everything), but the teradata case would require re-implementing for nil. As far as I can tell this would work fine, but overriding a protocol implementation feels sketchy.

I suppose an alternative would be to not provide any impls, and then check (satisfies? ISQLParameter x) and do the default behavior if not. Not sure if that would perform worse though.

Comment by Paudi Moriarty [ 02/Aug/13 10:09 AM ]

Beware that satisfies? is slow in the negative case. I'd avoid it here if possible.

Comment by Gary Fredericks [ 08/Oct/13 9:49 PM ]

Revisiting this since it keeps coming up at work.

I feel like the main downside to my patch's approach is that it precludes interacting with two different kinds of databases in the same process – if I want to connect to postgres in one function and have it write nulls one way, while writing nulls a different way to teradata, this isn't really possible.

The only other idea I have is passing a "setter" function as an option, which would do basically the same thing the protocol does in my patch.

Any ideas?

Comment by Sean Corfield [ 03/Nov/13 11:07 PM ]

Since this really is a per-db issue - and restricted to a handful of DBs (or fewer) - my sense is that we should do something with the db-spec itself to offer a way to override how set-parameters works. Having looked at the code, here's my proposal:

Wherever the code calls set-parameters in the context of a db-spec, it will check for a :set-parameters key in the db-spec and use that if present, else default to the normal set-parameters call.

This would allow you to add :set-parameters to your db-spec with a function that mimicked the built-in set-parameters except handled null values in a different way.

I'm going to make this change for 0.3.0-beta1 (coming real soon) and see how it works for you...
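
The resolution above can be sketched as a db-spec carrying a custom :set-parameters function (the connection keys and the use of java.sql.Types/NULL are illustrative assumptions — some drivers want the column's actual SQL type):

```clojure
(import '(java.sql PreparedStatement Types))

;; Assumed db-spec shape for illustration: the :set-parameters value
;; replaces the library's default parameter-setting logic.
(def teradata-spec
  {:subprotocol "teradata"
   :subname     "//host/db"
   :set-parameters
   (fn [^PreparedStatement stmt params]
     (dorun
      (map-indexed
       (fn [ix value]
         (if (nil? value)
           ;; Types/NULL as a generic type code; a driver may require
           ;; the real column type here instead.
           (.setNull stmt (inc ix) Types/NULL)
           (.setObject stmt (inc ix) value)))
       params)))})
```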

Comment by Sean Corfield [ 03/Nov/13 11:17 PM ]

Asserting "fixed" by virtue of allowing set-parameters to be overridden per-db-spec.

Comment by Felix Andrews [ 07/Nov/13 10:58 PM ]

Good solution. Many thanks!





[JDBC-47] Transactions do not rollback on non-Exception Throwables Created: 26/Feb/13  Updated: 01/Jun/16  Resolved: 26/Feb/13

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Gary Fredericks Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None
Environment:

0.2.3


Attachments: Text File JDBC-47-p1.patch    

 Description   

transaction* currently catches Exceptions for its rollback logic. This means Throwables subclassing java.lang.Error are not caught, and indeed the transaction seems to be immediately committed in this case (at least if the connection has autocommit set to true).

This happened to me in the case of an OutOfMemoryError. Another example is that the following code commits:

(transaction
  (insert-records "table" {:foo 12})
  (assert false))

Presumably this can be fixed by catching Throwable instead of Exception. I'll prepare a patch to that effect.
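
The shape of the fix can be sketched with a bare-bones transaction wrapper (names and structure here are illustrative, not java.jdbc's actual internals):

```clojure
;; Catching Throwable (not just Exception) ensures Errors such as
;; OutOfMemoryError and AssertionError also trigger a rollback.
(defn transaction-sketch [^java.sql.Connection con f]
  (.setAutoCommit con false)
  (try
    (let [result (f)]
      (.commit con)
      result)
    (catch Throwable t
      (.rollback con)
      (throw t))))
```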



 Comments   
Comment by Sean Corfield [ 26/Feb/13 4:48 PM ]

Thanx Gary. In the latest source there may be two pieces of transaction logic so please check for anywhere else that Exception is mentioned!

Comment by Gary Fredericks [ 26/Feb/13 5:17 PM ]

This patch changes Exception to Throwable in two places – transaction* and db-transaction*. The tests pass.

Comment by Sean Corfield [ 26/Feb/13 8:08 PM ]

Patch applied.





[JDBC-89] If query is called with a bare SQL string instead of a vector, SQLite driver dumps Created: 29/Jan/14  Updated: 01/Jun/16  Resolved: 30/Jan/14

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   
(query db "select * from test") ;; should be a vector

Causes SQLite to dump and probably isn't very pretty on other DBs. java.jdbc should either validate arguments better, or accept a string and package it up itself.



 Comments   
Comment by Sean Corfield [ 30/Jan/14 12:26 AM ]
(query db "select * from fruit")

is now legal but will remain undocumented. Since optional positional arguments are expected, any attempt to call query with SQL and params not in a vector will throw an IllegalArgumentException anyway. This seems like a reasonable compromise.





[JDBC-105] the :as-array? option to query changes the chunkiness of the result Created: 27/Feb/15  Updated: 01/Jun/16  Resolved: 11/Mar/15

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Kevin Downey Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: None


 Description   

jdbc/query uses doall to force results when :as-array? is false and uses vec when :as-array? is true. If you process the result as a seq after that, seqs on vectors are chunked while the seq from the doall is not, so you can get very different behaviour.

:as-array? (rows as vectors) seems like it should be orthogonal to changing chunking.



 Comments   
Comment by Sean Corfield [ 27/Feb/15 4:33 PM ]

Good catch. So use doall in both cases and convert via vec in the :as-array? case at the end?

Comment by Sean Corfield [ 27/Feb/15 4:34 PM ]

(or (comp vec doall) perhaps?)

Comment by Kevin Downey [ 27/Feb/15 6:23 PM ]

it doesn't matter how you construct the pvector, calling seq on it will return a chunked seq https://github.com/clojure/clojure/blob/master/src/jvm/clojure/lang/PersistentVector.java#L277-L279

Comment by Kevin Downey [ 27/Feb/15 6:29 PM ]

The issue, I guess, isn't the chunking, it is changing the return type of the collection based on :as-array?. One of the differences between the two return types is the chunkedness of the seq you get by calling seq on each.

Comment by Sean Corfield [ 11/Mar/15 1:24 PM ]

Then I guess I'm not sure what is actionable here. Users specifically requested the ability to return results as vectors instead of maps and so :as-arrays? does exactly what it says on the tin – and the chunkiness is just an artifact of requesting a different result type. Closing this out but curious as to what, if anything, would be appropriate in your mind here...?

Comment by Sean Corfield [ 11/Mar/15 1:25 PM ]

:as-arrays? was added in response to user requests and does what those users wanted. The chunkiness is just an artifact of vectors that seems unimportant (at least to those users).
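
The difference Kevin describes is easy to observe at a REPL; the helper below builds an unchunked lazy seq, similar in structure to a result-set-seq (the helper itself is just for illustration):

```clojure
;; An unchunked lazy seq built with lazy-seq/cons:
(defn lazy-nums [n]
  (lazy-seq (when (pos? n) (cons n (lazy-nums (dec n))))))

(chunked-seq? (seq (doall (lazy-nums 5)))) ;=> false
(chunked-seq? (seq (vec (lazy-nums 5))))   ;=> true (vectors chunk)
```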





[JDBC-119] Clean up syntax for insert row / insert multiple row to improve options handling Created: 09/Apr/16  Updated: 01/Jun/16  Resolved: 10/Apr/16

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

Version 0.5.5 introduced a single options map, flagged by the :options argument. This is a compromise while I figure out a better way to change insert! into a better-behaved function, instead of one that is inherently variadic.



 Comments   
Comment by Sean Corfield [ 09/Apr/16 6:45 PM ]

This is a fair bit trickier than create-table-ddl because there are several arities and argument combinations supported by a single function today (which, in hindsight, was clearly a bad idea!):

(insert! db table {:a row :of data}) ; the simplest case
(insert! db table {:first row :of data} {:second row :of data} {:etc etc} ...) ; insert an arbitrary number of rows
(insert! db table [:col1 :col2 :col3] [val1-1 val1-2 val1-3] [val2-1 val2-2 val2-3] ...) ; insert an arbitrary number of rows by ordered columns

In that third case, the column names can be omitted – nil can be passed instead of that vector.

In addition to all of that, we want to be able to optionally pass an options map at the end.

For the multiple row cases, we could require the rows to be a sequence of maps (for case 2 above) or a sequence of vectors (for case 3):

(insert! db table [{:first row :of data} {:second row :of data} {:etc etc} ...]) ; insert an arbitrary number of rows
(insert! db table [:col1 :col2 :col3] [[val1-1 val1-2 val1-3] [val2-1 val2-2 val2-3] ...]) ; insert an arbitrary number of rows by ordered columns

We can distinguish those cases easily enough:

(defn insert!
  ([db table rows] ...)
  ([db table cols values] ...))

We can even add the optional map of options:

(defn insert!
  ([db table rows] ...)
  ([db table cols-or-rows values-or-opts]
    (if (map? values-or-opts)
      ;; insert db table rows opts
      (insert-rows db table cols-or-rows values-or-opts)
      ;; insert db cols values opts
      (insert-cols db table cols-or-rows values-or-opts {})))
  ([db table cols values opts]
    (insert-cols db table cols values opts)))

This ignores the single row case, but we can add that in and we're still fine, although the logic is getting ugly:

(defn insert!
  ([db table row-or-rows] ; this case is ok
    (if (map? row-or-rows)
      ;; insert db table row opts
      (insert-one-row db table row-or-rows {})
      ;; insert db table rows opts
      (insert-rows db table row-or-rows {})))
  ([db table cols-or-rows values-or-opts]
    (if (map? values-or-opts)
      (if (map? cols-or-rows)
        ;; insert db table row opts
        (insert-one-row db table cols-or-rows values-or-opts)
        ;; insert db table rows opts
        (insert-rows db table cols-or-rows values-or-opts))
      ;; insert db cols values opts
      (insert-cols db table cols-or-rows values-or-opts {})))
  ([db table cols values opts]
    (insert-cols db table cols values opts)))

However, we cannot also maintain complete backward compatibility with the current calling sequence because of this case:

(insert! db table {:first row :of data} {:second row :of data})

We can disambiguate all other forms of the combined old and new syntax (we can tell sequence of rows from sequence of cols by (map? (first rows-or-cols))) and for the old columns syntax, we have repeated vectors optionally followed by a keyword - either deprecated options unrolled or the :options flag.

We could assume that the second map is options if a) it is empty or b) contains only keys from the set that insert! allows, namely :transaction? and :entities. That might conflict with inserting into a table with a column named entities where all the other columns were optional (are there databases where transaction? would be a valid column name?), so we would additionally need to check that the entities value was something function-like...

And we'd still need to support keywords in the argument list, even if they are deprecated, until 0.6.0 comes along.

Comment by Sean Corfield [ 09/Apr/16 7:05 PM ]

We could also just deprecate insert! altogether and introduce insert-one! and insert-multi!, but that breaks the symmetry with the other functions.

Note that execute! already has a slightly strange calling sequence for applying a SQL statement multiple times:

(execute! db ["some SQL string" first set of data]) ; default of {:multi? false}
(execute! db ["some SQL string" [first set of data] [second set of data]] {:multi? true})

That's required because a SQL parameter can have an array value, so we need to distinguish between those cases explicitly – and it wouldn't help with insert! since we can already tell the difference between a single row and a series of rows. However, it does highlight that when inserting values by column, the proposed sequence of value vectors isn't entirely safe, since you might be trying to insert such an array-valued SQL parameter.

So perhaps a better path here is to deprecate the multi-row version of insert! and use it purely for single row insertions, and then add insert-multi! for the multiple row case – and it would require sequence of row values. This would still require some disambiguation logic for the arity-4 case but it would be simple, like the original [db table cols-or-rows values-or-opts] case above. This approach would mean that insert-multi! would not need to support any legacy (deprecated) cases and therefore could be "clean".

It wouldn't address the ambiguity of (insert! db table map-1 map-2) for legacy support but that horribleness would only be temporary until 0.6.0 came along.

Comment by Sean Corfield [ 10/Apr/16 1:29 AM ]

Implemented in 0.5.6. Clojure Guides updated. Release queued.
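
The split described above looks like this in use (the table and row data are illustrative):

```clojure
(require '[clojure.java.jdbc :as jdbc])

;; insert! now handles the single-row case:
(jdbc/insert! db-spec :fruit {:name "Apple" :cost 12})

;; insert-multi! takes the rows as one sequence of maps...
(jdbc/insert-multi! db-spec :fruit [{:name "Pear" :cost 15}
                                    {:name "Plum" :cost 8}])

;; ...or ordered columns plus a sequence of value vectors:
(jdbc/insert-multi! db-spec :fruit
                    [:name :cost]
                    [["Fig" 7] ["Kiwi" 9]])
```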





[JDBC-4] Global connection mechanism / connection pooling Created: 08/May/11  Updated: 01/Jun/16  Resolved: 10/Aug/11

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

Nicolas Buduroi: What I would like to see added is a global connection mechanism like in ClojureQL and Lobos. We could even add a connection pool behind the scenes.



 Comments   
Comment by Sean Corfield [ 09/Jul/11 12:23 PM ]

For production usage, I've been relying on c3p0 for connection pooling which works very well with c.j.j so I'm not sure that we need something specific within the library itself. What we do need, of course, is documentation on how to use something like c3p0 for connection pooling with c.j.j!

Comment by Sean Corfield [ 17/Jul/11 3:31 AM ]

Given the ease of using 3rd-party connection pooling libraries like c3p0, I think this is low priority (and I may simply close out after documenting how to use c3p0 with c.j.j).

Comment by Sean Corfield [ 10/Aug/11 2:21 AM ]

I've documented it here: https://github.com/clojure/java.jdbc/blob/master/doc/clojure/java/jdbc/ConnectionPooling.md

I don't believe this should be in c.j.j itself so I'm going to close this.

Comment by Sean Corfield [ 10/Aug/11 2:22 AM ]

Documented how to use a 3rd party library for connection pooling which I think is sufficient since it's so easy.
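
The documented approach amounts to wrapping a c3p0 data source in a db-spec; a minimal sketch (the db-spec keys and helper are illustrative, the class and setters are c3p0's):

```clojure
(import 'com.mchange.v2.c3p0.ComboPooledDataSource)

;; Turn a plain driver-based db-spec into a pooled :datasource spec
;; that java.jdbc functions accept directly.
(defn pooled-spec [{:keys [classname subprotocol subname user password]}]
  (let [cpds (doto (ComboPooledDataSource.)
               (.setDriverClass classname)
               (.setJdbcUrl (str "jdbc:" subprotocol ":" subname))
               (.setUser user)
               (.setPassword password))]
    {:datasource cpds}))
```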





[JDBC-63] Rename and expose db-with-query-results* Created: 26/Jun/13  Updated: 01/Jun/16  Resolved: 03/Nov/13

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Minor
Reporter: Justin Kramer Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None

Attachments: Text File resultset.patch    
Patch: Code

 Description   

I wanted to leverage java.jdbc but provide my own result-set-seq function. The only way to do that was to use alter-var-root, reach into private functions, and resort to other such hacks.

The attached patch renames db-with-query-results* to db-with-result-set*, makes it public, and makes the function it accepts expect a ResultSet.

This separates result set processing from query execution. I.e., I can ask the library to execute some SQL against a DB spec, return a ResultSet, and then I'm free to process it as I see fit.

All tests pass.



 Comments   
Comment by Sean Corfield [ 26/Jun/13 5:27 PM ]

I can't say I like that approach. I really don't want to expose the ...* implementation functions. Can you explain what you needed to do differently in your result-set-seq function? Making that extensible by some other approach seems like a better way to tackle this...

Comment by Justin Kramer [ 26/Jun/13 5:48 PM ]

Pasting from our chat on IRC:

<jkkramer> I was initially thinking that (I have a use case in mind I'll explain) but thought this approach allowed for maximum flexibility for callers
<jkkramer> my particular use case is I wanted custom column labels
<jkkramer> which could be accomplished with a custom-label keyword arg or similar to result-set-seq, which is a function that takes rsmeta and idx
<jkkramer> however – i also wanted to be able to return additional per-column metadata such as the table name, which could be used later on to e.g. create nested hash map results. it starts getting complicated to rely on result-set-seq. being able to write my own ResultSet-processing function would be nice
<jkkramer> but i don't want to have to bother with connections, prepared statements, etc. just ResultSet processing

Comment by Sean Corfield [ 15/Sep/13 3:04 PM ]

FWIW, I just ran into a situation where I would like to be able to return a raw ResultSet from query so I'll have to give this some thought. Won't make 0.3.0 tho'...

Comment by Justin Kramer [ 16/Sep/13 6:55 AM ]

Just to reinforce the point: I don't see it as exposing implementation details so much as exposing the host – i.e., JDBC building blocks. Being able to process a ResultSet without having to manage the connection details would be extremely useful to me. I can make do until 0.4.0+ though.

Comment by Sean Corfield [ 03/Nov/13 10:41 PM ]

I've decided to incorporate a variant of this (since I was in messing with the guts of the query function anyway).

db-with-query-results* is now a public function called db-query-with-resultset but otherwise does exactly what you wanted, based on your patch.
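
Usage of the new public function can be sketched as follows (the SQL and the hand-rolled ResultSet loop are illustrative; the point is that the caller only handles the ResultSet):

```clojure
(require '[clojure.java.jdbc :as jdbc])

;; Run the SQL, then process the raw java.sql.ResultSet yourself,
;; without managing connections or prepared statements.
(jdbc/db-query-with-resultset
  db-spec
  ["SELECT id, name FROM fruit WHERE cost < ?" 50]
  (fn [^java.sql.ResultSet rs]
    (loop [acc []]
      (if (.next rs)
        (recur (conj acc (.getString rs "name")))
        acc))))
```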

Comment by Justin Kramer [ 04/Nov/13 2:15 PM ]

Excellent! Thanks, Sean. I'll be taking beta1 for a spin shortly.





[JDBC-53] create-table & drop-table doesn't honor naming strategy Created: 24/Apr/13  Updated: 01/Jun/16  Resolved: 15/Sep/13

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Roman Scherer Assignee: Sean Corfield
Resolution: Completed Votes: 1
Labels: None


 Description   

create-table & drop-table don't take the current naming strategy into account.



 Comments   
Comment by Sean Corfield [ 24/Apr/13 11:08 AM ]

Just to note: the naming strategy stuff, based on dynamically bound globals, is all deprecated in 0.3.0. Alternatives to create-table and drop-table, which respect the new entities function approach, will be provided in an upcoming alpha build of 0.3.0.

Comment by David James [ 11/May/13 8:51 PM ]

Any update on this? I noticed the deprecations while reading the code. Could you say a few words or add a few docs on the 0.3 way of doing naming strategies?

Comment by Sean Corfield [ 11/May/13 11:20 PM ]

The replacement is entities and identifiers (as keyword arguments in java.jdbc functions and as macros in java.jdbc.sql). You can see some examples here:

http://clojure.github.io/java.jdbc/doc/clojure/java/jdbc/NameMapping.html

Replacements for create-table / drop-table will be added before beta1. I'm not sure right now how many more alphas we'll have. There's a lot of churn in the code right now and I want it to be feature complete and fairly settled before the first beta.
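
A sketch of the entities/identifiers replacement in use (option names per the comment above; the table, columns, and mapping functions are illustrative):

```clojure
(require '[clojure.java.jdbc :as jdbc])

;; :identifiers maps SQL column names to Clojure keywords on the
;; way out; :entities maps Clojure names to SQL on the way in.
(jdbc/query db-spec ["SELECT * FROM my_table"]
            :identifiers #(.replace ^String % \_ \-))

(jdbc/insert! db-spec :my-table {:some-col 42}
              :entities #(.replace ^String % \- \_))
```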

Comment by Matt Oquist [ 12/May/13 8:56 PM ]

Thanks for the update; looking forward to it!

Comment by Sean Corfield [ 15/Sep/13 3:39 PM ]

Will be in alpha5





[JDBC-71] CLJ-1252 (Clojure 1.6) invalidates keywords that start with digits Created: 28/Oct/13  Updated: 01/Jun/16  Resolved: 28/Oct/13

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

A ResultSet can contain pretty much any column names as keys. As of CLJ-1252 being applied, keywords are more restricted now. It's not clear what should be done with keys in a ResultSet that cannot be converted to keywords... left as strings perhaps?



 Comments   
Comment by Sean Corfield [ 28/Oct/13 4:49 PM ]

Tests updated to use (keyword "1") instead of :1.





[JDBC-75] Provide a way to pass options into a transaction Created: 15/Nov/13  Updated: 01/Jun/16  Resolved: 12/Dec/13

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

The obvious solution here is to allow keyword/value pairs in the db-transaction "binding" spec and pass those through to the Java transaction code.

Exact details TBD.



 Comments   
Comment by Sean Corfield [ 12/Dec/13 8:15 PM ]

Looks like just isolation level is supported?

http://docs.oracle.com/javase/6/docs/api/java/sql/Connection.html#setTransactionIsolation(int)

Comment by Sean Corfield [ 12/Dec/13 8:49 PM ]

with-db-transaction allows :isolation in the binding with values :read-committed, :read-uncommitted, :repeatable-read, :serializable
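
Usage of the resolved feature, per the comment above (the binding syntax shown is the keyword-in-binding form described here; the query is illustrative):

```clojure
(require '[clojure.java.jdbc :as jdbc])

;; Run the body in a transaction at SERIALIZABLE isolation.
(jdbc/with-db-transaction [t-con db-spec :isolation :serializable]
  (jdbc/query t-con ["SELECT * FROM accounts WHERE id = ?" 42]))
```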





[JDBC-76] Upgrading to any version past 0.3.0-alpha4 causes Exception with result_set_read_column Created: 20/Nov/13  Updated: 01/Jun/16  Resolved: 20/Nov/13

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Thomas Steffes Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: None
Environment:

Mac OSX 10.9
Leiningen 2.3.3 on Java 1.7.0_15 Java HotSpot(TM) 64-Bit Server VM
Using this version of postgresql [postgresql "9.1-901-1.jdbc4"]



 Description   

If I upgrade to any version past 0.3.0-alpha4, I get this exception when attempting to run my ring server:

Exception in thread "main" java.lang.IllegalArgumentException: No single method: result_set_read_column of interface: clojure.java.jdbc.IResultSetReadColumn found for function: result-set-read-column of protocol: IResultSetReadColumn, compiling:(clojure/java/jdbc.clj:275:49)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6567)
at clojure.lang.Compiler.analyze(Compiler.java:6361)
at clojure.lang.Compiler.analyze(Compiler.java:6322)
at clojure.lang.Compiler$BodyExpr$Parser.parse(Compiler.java:5708)
at clojure.lang.Compiler$FnMethod.parse(Compiler.java:5139)
at clojure.lang.Compiler$FnExpr.parse(Compiler.java:3751)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6558)
at clojure.lang.Compiler.analyze(Compiler.java:6361)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6548)
at clojure.lang.Compiler.analyze(Compiler.java:6361)
at clojure.lang.Compiler.analyze(Compiler.java:6322)
at clojure.lang.Compiler$InvokeExpr.parse(Compiler.java:3624)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6562)
at clojure.lang.Compiler.analyze(Compiler.java:6361)
at clojure.lang.Compiler.analyze(Compiler.java:6322)
at clojure.lang.Compiler$BodyExpr$Parser.parse(Compiler.java:5708)
at clojure.lang.Compiler$FnMethod.parse(Compiler.java:5139)
at clojure.lang.Compiler$FnExpr.parse(Compiler.java:3751)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6558)
at clojure.lang.Compiler.analyze(Compiler.java:6361)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6548)
at clojure.lang.Compiler.analyze(Compiler.java:6361)
at clojure.lang.Compiler.access$100(Compiler.java:37)
at clojure.lang.Compiler$LetExpr$Parser.parse(Compiler.java:5973)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6560)
at clojure.lang.Compiler.analyze(Compiler.java:6361)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6548)
at clojure.lang.Compiler.analyze(Compiler.java:6361)
at clojure.lang.Compiler.analyze(Compiler.java:6322)
at clojure.lang.Compiler$BodyExpr$Parser.parse(Compiler.java:5708)
at clojure.lang.Compiler$LetExpr$Parser.parse(Compiler.java:6009)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6560)
at clojure.lang.Compiler.analyze(Compiler.java:6361)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6548)
at clojure.lang.Compiler.analyze(Compiler.java:6361)
at clojure.lang.Compiler.analyze(Compiler.java:6322)
at clojure.lang.Compiler$BodyExpr$Parser.parse(Compiler.java:5708)
at clojure.lang.Compiler$FnMethod.parse(Compiler.java:5139)
at clojure.lang.Compiler$FnExpr.parse(Compiler.java:3751)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6558)
at clojure.lang.Compiler.analyze(Compiler.java:6361)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6548)
at clojure.lang.Compiler.analyze(Compiler.java:6361)
at clojure.lang.Compiler.access$100(Compiler.java:37)
at clojure.lang.Compiler$DefExpr$Parser.parse(Compiler.java:529)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6560)
at clojure.lang.Compiler.analyze(Compiler.java:6361)
at clojure.lang.Compiler.analyze(Compiler.java:6322)
at clojure.lang.Compiler.eval(Compiler.java:6623)
at clojure.lang.Compiler.load(Compiler.java:7064)
at clojure.lang.RT.loadResourceScript(RT.java:370)
at clojure.lang.RT.loadResourceScript(RT.java:361)
at clojure.lang.RT.load(RT.java:440)
at clojure.lang.RT.load(RT.java:411)
at clojure.core$load$fn__5018.invoke(core.clj:5530)
at clojure.core$load.doInvoke(core.clj:5529)
at clojure.lang.RestFn.invoke(RestFn.java:408)
at clojure.core$load_one.invoke(core.clj:5336)
at clojure.core$load_lib$fn__4967.invoke(core.clj:5375)
at clojure.core$load_lib.doInvoke(core.clj:5374)
at clojure.lang.RestFn.applyTo(RestFn.java:142)
at clojure.core$apply.invoke(core.clj:619)
at clojure.core$load_libs.doInvoke(core.clj:5413)
at clojure.lang.RestFn.applyTo(RestFn.java:137)
at clojure.core$apply.invoke(core.clj:619)
at clojure.core$require.doInvoke(core.clj:5496)
at clojure.lang.RestFn.invoke(RestFn.java:436)
at eq_server.db$eval2811$loading_4910auto___2812.invoke(db.clj:1)
at eq_server.db$eval2811.invoke(db.clj:1)
at clojure.lang.Compiler.eval(Compiler.java:6619)
at clojure.lang.Compiler.eval(Compiler.java:6608)
at clojure.lang.Compiler.load(Compiler.java:7064)
at clojure.lang.RT.loadResourceScript(RT.java:370)
at clojure.lang.RT.loadResourceScript(RT.java:361)
at clojure.lang.RT.load(RT.java:440)
at clojure.lang.RT.load(RT.java:411)
at clojure.core$load$fn__5018.invoke(core.clj:5530)
at clojure.core$load.doInvoke(core.clj:5529)
at clojure.lang.RestFn.invoke(RestFn.java:408)
at clojure.core$load_one.invoke(core.clj:5336)
at clojure.core$load_lib$fn__4967.invoke(core.clj:5375)
at clojure.core$load_lib.doInvoke(core.clj:5374)
at clojure.lang.RestFn.applyTo(RestFn.java:142)
at clojure.core$apply.invoke(core.clj:619)
at clojure.core$load_libs.doInvoke(core.clj:5413)
at clojure.lang.RestFn.applyTo(RestFn.java:137)
at clojure.core$apply.invoke(core.clj:619)
at clojure.core$require.doInvoke(core.clj:5496)
at clojure.lang.RestFn.invoke(RestFn.java:457)
at eq_server.models.peek$eval2164$loading_4910auto___2165.invoke(peek.clj:1)
at eq_server.models.peek$eval2164.invoke(peek.clj:1)
at clojure.lang.Compiler.eval(Compiler.java:6619)
at clojure.lang.Compiler.eval(Compiler.java:6608)
at clojure.lang.Compiler.load(Compiler.java:7064)
at clojure.lang.RT.loadResourceScript(RT.java:370)
at clojure.lang.RT.loadResourceScript(RT.java:361)
at clojure.lang.RT.load(RT.java:440)
at clojure.lang.RT.load(RT.java:411)
at clojure.core$load$fn__5018.invoke(core.clj:5530)
at clojure.core$load.doInvoke(core.clj:5529)
at clojure.lang.RestFn.invoke(RestFn.java:408)
at clojure.core$load_one.invoke(core.clj:5336)
at clojure.core$load_lib$fn__4967.invoke(core.clj:5375)
at clojure.core$load_lib.doInvoke(core.clj:5374)
at clojure.lang.RestFn.applyTo(RestFn.java:142)
at clojure.core$apply.invoke(core.clj:619)
at clojure.core$load_libs.doInvoke(core.clj:5413)
at clojure.lang.RestFn.applyTo(RestFn.java:137)
at clojure.core$apply.invoke(core.clj:619)
at clojure.core$require.doInvoke(core.clj:5496)
at clojure.lang.RestFn.invoke(RestFn.java:551)
at eq_server.controllers.peeks$eval2133$loading_4910auto___2134.invoke(peeks.clj:1)
at eq_server.controllers.peeks$eval2133.invoke(peeks.clj:1)
at clojure.lang.Compiler.eval(Compiler.java:6619)
at clojure.lang.Compiler.eval(Compiler.java:6608)
at clojure.lang.Compiler.load(Compiler.java:7064)
at clojure.lang.RT.loadResourceScript(RT.java:370)
at clojure.lang.RT.loadResourceScript(RT.java:361)
at clojure.lang.RT.load(RT.java:440)
at clojure.lang.RT.load(RT.java:411)
at clojure.core$load$fn__5018.invoke(core.clj:5530)
at clojure.core$load.doInvoke(core.clj:5529)
at clojure.lang.RestFn.invoke(RestFn.java:408)
at clojure.core$load_one.invoke(core.clj:5336)
at clojure.core$load_lib$fn__4967.invoke(core.clj:5375)
at clojure.core$load_lib.doInvoke(core.clj:5374)
at clojure.lang.RestFn.applyTo(RestFn.java:142)
at clojure.core$apply.invoke(core.clj:619)
at clojure.core$load_libs.doInvoke(core.clj:5417)
at clojure.lang.RestFn.applyTo(RestFn.java:137)
at clojure.core$apply.invoke(core.clj:619)
at clojure.core$require.doInvoke(core.clj:5496)
at clojure.lang.RestFn.invoke(RestFn.java:482)
at eq_server.routes$eval2107$loading_4910auto___2108.invoke(routes.clj:1)
at eq_server.routes$eval2107.invoke(routes.clj:1)
at clojure.lang.Compiler.eval(Compiler.java:6619)
at clojure.lang.Compiler.eval(Compiler.java:6608)
at clojure.lang.Compiler.load(Compiler.java:7064)
at clojure.lang.RT.loadResourceScript(RT.java:370)
at clojure.lang.RT.loadResourceScript(RT.java:361)
at clojure.lang.RT.load(RT.java:440)
at clojure.lang.RT.load(RT.java:411)
at clojure.core$load$fn__5018.invoke(core.clj:5530)
at clojure.core$load.doInvoke(core.clj:5529)
at clojure.lang.RestFn.invoke(RestFn.java:408)
at clojure.core$load_one.invoke(core.clj:5336)
at clojure.core$load_lib$fn__4967.invoke(core.clj:5375)
at clojure.core$load_lib.doInvoke(core.clj:5374)
at clojure.lang.RestFn.applyTo(RestFn.java:142)
at clojure.core$apply.invoke(core.clj:619)
at clojure.core$load_libs.doInvoke(core.clj:5413)
at clojure.lang.RestFn.applyTo(RestFn.java:137)
at clojure.core$apply.invoke(core.clj:619)
at clojure.core$require.doInvoke(core.clj:5496)
at clojure.lang.RestFn.invoke(RestFn.java:457)
at user$eval5.invoke(form-init8113116856213476175.clj:1)
at clojure.lang.Compiler.eval(Compiler.java:6619)
at clojure.lang.Compiler.eval(Compiler.java:6608)
at clojure.lang.Compiler.load(Compiler.java:7064)
at clojure.lang.Compiler.loadFile(Compiler.java:7020)
at clojure.main$load_script.invoke(main.clj:294)
at clojure.main$init_opt.invoke(main.clj:299)
at clojure.main$initialize.invoke(main.clj:327)
at clojure.main$null_opt.invoke(main.clj:362)
at clojure.main$main.doInvoke(main.clj:440)
at clojure.lang.RestFn.invoke(RestFn.java:421)
at clojure.lang.Var.invoke(Var.java:419)
at clojure.lang.AFn.applyToHelper(AFn.java:163)
at clojure.lang.Var.applyTo(Var.java:532)
at clojure.main.main(main.java:37)
Caused by: java.lang.IllegalArgumentException: No single method: result_set_read_column of interface: clojure.java.jdbc.IResultSetReadColumn found for function: result-set-read-column of protocol: IResultSetReadColumn
at clojure.lang.Compiler$InvokeExpr.<init>(Compiler.java:3423)
at clojure.lang.Compiler$InvokeExpr.parse(Compiler.java:3630)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6562)
... 169 more



 Comments   
Comment by Sean Corfield [ 20/Nov/13 6:02 PM ]

Unable to reproduce - I have alpha5 in production so I know it works. I suspect you have old compiled code in your project: try lein clean and see if that fixes it.

Comment by Thomas Steffes [ 21/Nov/13 9:29 AM ]

Indeed that was the problem. Should have tried that before filing. Thanks!

Comment by Sean Corfield [ 21/Nov/13 11:15 AM ]

No problem. Glad it was that easy to fix





[JDBC-79] Add function to group multiple (nested) statements / method calls on one connection without commit / rollback mechanics Created: 11/Dec/13  Updated: 01/Jun/16  Resolved: 12/Dec/13

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Niels van Klaveren Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

Currently, it is very easy to get spurious connection use across multiple statements, for instance by mapping a query function over a collection.
A workaround is to provide a binding through db-transaction.
However, this complects single-connection use with commit/rollback mechanics that might be unwanted.

Another option is to provide the binding yourself by wrapping things in a with-open/let construction.
This requires people to dig into the implementation of the connection system, and might not play well with it.

Using connection pools is the last option, but is usually overkill for simple tasks.

Creating a macro and function that provides a simple binding for a connection in the way db-transaction does, without the included commit/rollback mechanics would do the trick. Such a function would play nice with the rest of the new connection propagation system (even including levels), and still doesn't rely on the rebinding of global vars.



 Comments   
Comment by Sean Corfield [ 11/Dec/13 10:55 AM ]

FYI: https://github.com/clojuredocs/guides/blob/master/articles/ecosystem/java_jdbc/home.md#reusing-connections

I'll probably add with-db-connection for this.

Comment by Sean Corfield [ 12/Dec/13 6:57 PM ]

Added with-db-connection. Will be in RC1.
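A minimal sketch of the added macro, reusing a single connection for several queries without any transaction semantics; db-spec and the items table are hypothetical.

```clojure
(require '[clojure.java.jdbc :as jdbc])

;; Every query inside the body runs on the same open connection,
;; which is closed when the body exits.
(jdbc/with-db-connection [con db-spec]
  (doall (map #(jdbc/query con ["SELECT * FROM items WHERE id = ?" %])
              [1 2 3])))
```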





[JDBC-80] Add support for generic java.sql.DatabaseMetaData use now connection management is more explicit. Created: 11/Dec/13  Updated: 01/Jun/16  Resolved: 12/Dec/13

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Niels van Klaveren Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

In clojure.java.jdbc 0.2.3 it was easy to call methods on the connection, like .getDatabaseMetaData.

Now connection management has changed, and finding, reaching and keeping connections (and accompanying DatabaseMetaData) is a lot harder.

It would be good to have two new functions that give access to all database metadata in a minimalistic way, and keep connection management in line with the new clojure.java.jdbc 0.3.0 ideals.

The first is a function that gets / sets up a connection by the new connection rules (find-, get-, add-connection), and provides a local binding for the DatabaseMetaData object resulting from calling .getDatabaseMetaData on that connection.
Of course, like a transaction, this also means the actions done within that binding need a single connection that is kept open for the duration of the binding block.

The second is a function (or enhancement of result-set-seq) that calls result-set-seq if the argument is (instance? java.sql.ResultSet), or else returns the argument itself; this is because metadata method calls can either give back a ResultSet that needs to be clojurized (e.g. .getTables) or a plain object (e.g. .getDriverName). Let's call it meta-result as an example.

That would make things like the following possible:

(db-metadata [m db]
  (map #(meta-result (.getIndexInfo m nil nil (% :table_name) false false))
       (meta-result (.getTables m nil schema nil (into-array String ["TABLE"])))))

As far as I can see that's a pretty minimalistic set of functions that unlocks all DatabaseMetaData possibilities. Also, because it integrates with the new connection rules, it would integrate nicely with transactions, in case of queries that are dynamically generated on metadata within the same transaction.



 Comments   
Comment by Sean Corfield [ 11/Dec/13 11:00 AM ]

Nice suggestion. Thanx. with-db-metadata is probably more in line with the current naming and perhaps metadata-result to tie back to that name, rather than cause any confusion with Clojure's built-in meta?

Comment by Niels van Klaveren [ 11/Dec/13 11:27 AM ]

Glad you like it. For my use it was the only basic option still missing from clojure.java.jdbc.
The names sound fine to me; I agree any possible confusion with core's meta should be avoided.

Comment by Sean Corfield [ 12/Dec/13 7:56 PM ]

Added with-db-metadata and metadata-result. Will be in RC1.
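A minimal sketch of the added pair as implemented: with-db-metadata binds a DatabaseMetaData object over one open connection, and metadata-result clojurizes a ResultSet (or passes other values through); db-spec is hypothetical.

```clojure
(require '[clojure.java.jdbc :as jdbc])

;; List all tables visible to the connection.
(jdbc/with-db-metadata [md db-spec]
  (jdbc/metadata-result
    (.getTables md nil nil nil (into-array String ["TABLE"]))))
```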





[JDBC-86] Provide better protocol-based extension point for setting parameters in a SQL statement Created: 26/Dec/13  Updated: 01/Jun/16  Resolved: 29/Dec/13

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Enhancement Priority: Major
Reporter: Sean Corfield Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: None


 Description   

The current hooks are:

  • ISQLValue protocol based solely on the value, but still uses .setObject to add the parameter value
  • :set-parameters key in db-spec, to override the entire parameter setting mechanism

Marcus Brito suggested this: https://github.com/clojure/java.jdbc/pull/34

That's a cleaner extension point (that still preserves the basic ISQLValue hook) but I can't accept a PR as-is.



 Comments   
Comment by Sean Corfield [ 29/Dec/13 6:24 PM ]

Added ISQLParameter / set-parameter protocol / function.
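A sketch of extending the new protocol so java.util.Date values are bound as SQL timestamps instead of going through the default .setObject path; the java.util.Date choice is illustrative.

```clojure
(require '[clojure.java.jdbc :as jdbc])

(extend-protocol jdbc/ISQLParameter
  java.util.Date
  ;; v is the parameter value, stmt the PreparedStatement, ix the 1-based index.
  (set-parameter [v ^java.sql.PreparedStatement stmt ^long ix]
    (.setTimestamp stmt ix (java.sql.Timestamp. (.getTime v)))))
```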





[JDBC-94] Parsing of sqlite or HSQLDB in-memory databases fail Created: 01/Apr/14  Updated: 01/Jun/16  Resolved: 30/Jun/14

Status: Closed
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Peter Schwarz Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: bug
Environment:

Using the following driver dependencies:
[org.hsqldb/hsqldb "2.3.2"]
[org.xerial/sqlite-jdbc "3.7.2"]



 Description   

When using the SQLite in-memory db URI "jdbc:sqlite::memory:", the following error occurs:

SQLException out of memory org.sqlite.DB.throwex (DB.java:370)

This is a vague exception which occurs when the db is not actually created (see https://bitbucket.org/xerial/sqlite-jdbc).

Likewise, when parsing an HSQLDB in-memory db URL, "jdbc:hsqldb:mem:testdb", the following error occurs:

2014-04-01T14:13:11.996-0500  SEVERE  could not reopen database
org.hsqldb.HsqlException: Database lock acquisition failure: lockFile: org.hsqldb.persist.LockFile@2be0e04[file =/.lck, exists=false, locked=false, valid=false, ] method: openRAF reason: java.io.FileNotFoundException: /.lck (Permission denied)

It appears to be creating the equivalent connection for

{:subprotocol "hsqldb" :subname ""}
but should be creating