
[JDBC-94] Parsing of sqlite or HSQLDB in-memory database URIs fails Created: 01/Apr/14  Updated: 30/Jun/14  Resolved: 30/Jun/14

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Peter Schwarz Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: bug
Environment:

Using the following driver dependencies:
[org.hsqldb/hsqldb "2.3.2"]
[org.xerial/sqlite-jdbc "3.7.2"]



 Description   

When entering a sqlite memory db uri: "jdbc:sqlite::memory:", the following error occurs:

SQLException out of memory org.sqlite.DB.throwex (DB.java:370)

This is a vague exception which occurs when the db is not actually created (see https://bitbucket.org/xerial/sqlite-jdbc).

Likewise, when parsing a HSQLDB in-memory db url: "jdbc:hsqldb:mem:testdb", the following error occurs:

2014-04-01T14:13:11.996-0500  SEVERE  could not reopen database
org.hsqldb.HsqlException: Database lock acquisition failure: lockFile: org.hsqldb.persist.LockFile@2be0e04[file =/.lck, exists=false, locked=false, valid=false, ] method: openRAF reason: java.io.FileNotFoundException: /.lck (Permission denied)

It appears to be creating the connection equivalent to

{:subprotocol "hsqldb" :subname ""}

but should be creating

{:subprotocol "hsqldb" :subname "mem:testdb"}
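The empty :subname is consistent with how java.net.URI treats these strings once the jdbc: prefix is stripped: an in-memory URL such as hsqldb:mem:testdb is an opaque URI, so its path component is null and the mem:testdb portion is only reachable through the scheme-specific part. A small illustrative sketch (class name hypothetical, not java.jdbc's code):

```java
import java.net.URI;

public class OpaqueUriDemo {
    public static void main(String[] args) {
        // With the "jdbc:" prefix stripped, the remainder is parsed as a URI.
        URI hsqldb = URI.create("hsqldb:mem:testdb");

        // "mem:testdb" does not start with "/", so the URI is opaque:
        // getPath() is null and the subname only survives in the
        // scheme-specific part.
        System.out.println(hsqldb.isOpaque());              // true
        System.out.println(hsqldb.getPath());               // null
        System.out.println(hsqldb.getSchemeSpecificPart()); // mem:testdb
    }
}
```

A parser that rebuilds the spec map from getPath() would therefore produce :subname "", matching the behaviour reported above.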



 Comments   
Comment by Sean Corfield [ 01/Apr/14 3:17 PM ]

Thanx Peter. That sounds reasonably easy to repro so I'll create a couple of test cases shortly.

Comment by Sean Corfield [ 30/Jun/14 9:09 PM ]

Finally taking a look at these - sorry for the delay. It seems that java.jdbc expects a string argument to be something you can simply construct a URI out of and then pass to get-connection, so this suggests that either java.net.URI might be monkeying with the value, or parse-properties-uri can't figure out how to reconstruct the appropriate spec map from such a URI.

Comment by Sean Corfield [ 30/Jun/14 9:34 PM ]

Will be in 0.3.5.





[JDBC-50] insert-rows (incorrectly) patches incomplete records Created: 21/Mar/13  Updated: 06/Apr/13  Resolved: 06/Apr/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Cees van Kemenade Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: bug, docs
Environment:

Postgres database on Debian machine


Attachments: File jdbcIssue.clj    

 Description   

When using sql/insert-rows with a dataset that contains incomplete rows, the incomplete rows are patched with values from the preceding rows. This behaviour is not documented.

I would expect insert-rows either to accept only complete rows or, if it accepts incomplete rows, to pad them with nil values rather than values from other rows.

Attached is an example; the output from running it appears in its comments.



 Comments   
Comment by Sean Corfield [ 06/Apr/13 3:10 PM ]

This looks like an old / long-standing bug that no one has run into before.

In the new (soon-to-be 0.3.0) release, (clojure.java.jdbc.sql/insert :table [1 1] [2] []) would throw an exception:

IllegalArgumentException insert called with inconsistent number of columns / values clojure.java.jdbc.sql/insert-multi-row (sql.clj:116)

I will update clojure.java.jdbc/insert-rows to throw a similar exception.

Comment by Sean Corfield [ 06/Apr/13 3:38 PM ]

Fixed in 0.3.0-SNAPSHOT by throwing an exception if the value-groups are not all the same length.

Currently relies on the DB to trap column names not matching value groups (which is known not to work on SQLite but does work on other DBs as far as I know).

insert-rows is deprecated in 0.3.0 - use insert! instead going forward (which does more validation).
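The fix described amounts to a consistency check over the value groups before anything is sent to the database. In plain Java terms the check might look like this (method and class names hypothetical, not java.jdbc's actual code):

```java
import java.util.Arrays;
import java.util.List;

public class ValueGroupCheck {
    // Throws if the value groups are not all the same length,
    // mirroring the behaviour described for insert-multi-row.
    static void checkConsistent(List<List<Object>> valueGroups) {
        int expected = valueGroups.isEmpty() ? 0 : valueGroups.get(0).size();
        for (List<Object> group : valueGroups) {
            if (group.size() != expected) {
                throw new IllegalArgumentException(
                    "insert called with inconsistent number of columns / values");
            }
        }
    }

    public static void main(String[] args) {
        // Complete rows pass silently; an incomplete row fails fast.
        checkConsistent(Arrays.asList(
            Arrays.<Object>asList(1, 1), Arrays.<Object>asList(2, 3)));
        try {
            checkConsistent(Arrays.asList(
                Arrays.<Object>asList(1, 1), Arrays.<Object>asList(2)));
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Failing fast like this is safer than the old behaviour of silently reusing values from earlier rows.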





[JDBC-49] jdbc insert-records cannot handle spaces in column names Created: 21/Mar/13  Updated: 06/Apr/13  Resolved: 06/Apr/13

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Cees van Kemenade Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: bug
Environment:

Postgres on Debian


Attachments: File jdbcIssue2.clj    

 Description   

If a database contains column names that include spaces, sql/insert-record will fail. However, inserting via sql/do-commands or sql/insert-rows works as expected.

Attached is a file showing the error.



 Comments   
Comment by Cees van Kemenade [ 22/Mar/13 1:21 AM ]

Further analysis, based on a suggestion from Sean, leads to:

There seems to be an issue with the naming-strategy here - either in the way I define it or in the way it is implemented:

(sql/with-connection db
  (sql/with-naming-strategy {:keyword identity :entity (fn [x] (str \" x \"))}
    ...
    ))

When I remove the naming-strategy and replace the offending insert with a manual quoting strategy via

(sql/insert-records "test_jdbc_naming3" {((fn [x] (str \" x \")) "b- b") 7})

which corresponds to

(sql/insert-records "test_jdbc_naming3" {"\"b- b\"" 7})

then the code runs fine.

So the issue lies in the handling of the naming-strategy, or in the way I defined the naming-strategy (for Postgres).
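The manual workaround above is ordinary SQL identifier quoting: wrap the name in double quotes and, per the SQL standard, double any quotes embedded in the name itself. A minimal Java sketch (function and class names hypothetical):

```java
public class IdentifierQuoting {
    // Quote an SQL identifier, doubling embedded quotes per the SQL standard.
    static String quote(String identifier) {
        return "\"" + identifier.replace("\"", "\"\"") + "\"";
    }

    public static void main(String[] args) {
        System.out.println(quote("b- b")); // "b- b"
    }
}
```

This is essentially what an :entities quoting function does in later java.jdbc versions: applied to every table and column name before the SQL string is built.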

Comment by Sean Corfield [ 06/Apr/13 4:14 PM ]

Fixed in 0.3.0-SNAPSHOT by wiring as-str variable into compatibility layer as :entities argument.





[JDBC-35] Wrong timezone for java.sql.Date, java.sql.Time and java.sql.Timestamp objects returned by with-query-results Created: 20/Jul/12  Updated: 12/Sep/12  Resolved: 12/Sep/12

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Major
Reporter: Jestine Paul Assignee: Sean Corfield
Resolution: Declined Votes: 0
Labels: bug

Attachments: File resultset-timezone.diff    

 Description   

Our database stores date- and time-related information in UTC, but the results from with-query-results create objects in the local timezone. This causes a lot of unexpected behavior when constructing Joda DateTime objects with clj-time. Currently, with-query-results uses the single-argument getter functions in ResultSet:

getDate(int columnIndex)
getTime(int columnIndex)
getTimestamp(int columnIndex)

We could solve this problem by optionally passing timezone information and calling the two-argument getter functions:

getDate(int columnIndex, Calendar cal)
getTime(int columnIndex, Calendar cal)
getTimestamp(int columnIndex, Calendar cal)
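The day-boundary shift at the heart of this ticket can be reproduced without a database: the same instant falls on different calendar dates depending on the zone in which it is rendered. A sketch using java.time (the zone is chosen purely for illustration):

```java
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneId;
import java.time.ZoneOffset;

public class TimezoneShiftDemo {
    public static void main(String[] args) {
        // An instant shortly after midnight UTC on 2012-07-23
        Instant stored = Instant.parse("2012-07-23T00:30:00Z");

        // Rendered in UTC it is the 23rd; rendered in a zone west of UTC
        // it is still the 22nd - the off-by-one-day effect reported here.
        LocalDate utcDate = stored.atZone(ZoneOffset.UTC).toLocalDate();
        LocalDate clientDate = stored.atZone(ZoneId.of("America/Chicago")).toLocalDate();

        System.out.println(utcDate);    // 2012-07-23
        System.out.println(clientDate); // 2012-07-22
    }
}
```

Passing a Calendar built from TimeZone.getTimeZone("UTC") to the two-argument getters tells the driver which zone to use when materializing the stored value, instead of defaulting to the JVM's zone.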



 Comments   
Comment by Sean Corfield [ 21/Jul/12 1:30 AM ]

Can you provide a self-contained test case? Lots of people are using java.jdbc in production without running into this problem, and at World Singles we've had this in production for a long time in a high traffic environment without seeing any problems with timezones. Dates go in and out of the database unchanged, which is exactly as expected - and we have databases running in three different timezones.

Comment by Jestine Paul [ 21/Jul/12 5:29 AM ]

I have added a new test and it is failing with my Postgresql.

https://github.com/jestinepaul/java.jdbc/commit/195397b1b2a0245d2439ab9963fe2138450a27f3

Comment by Sean Corfield [ 21/Jul/12 1:03 PM ]

Looking at recommended practices out there, such as listed here: http://stackoverflow.com/questions/2532729/daylight-saving-time-and-timezone-best-practices one important thing is: "On Servers, set hardware clocks and OS clocks to UTC. Use NTP services on all servers." The same advice is repeated here: http://www.dbspecialists.com/blog/database-theory/intelligent-date-handling/

Looking further into PostgreSQL (a database I generally don't use), I see this: http://stackoverflow.com/questions/6151084/which-timestamp-type-to-choose-in-a-postgresql-database which seems to contain a lot of PostgreSQL-specific stuff. However, that article makes it clear that you need to set the database timezone to UTC in order to SELECT timestamp columns correctly.

In my opinion, the behavior you're seeing is not a bug (in java.jdbc) but an artifact of your environment being set up incorrectly. I'll leave this ticket open for a little while for more discussion but without a concrete patch that is shown to not affect other use cases, I will close this ticket by the end of August.

Comment by Jestine Paul [ 23/Jul/12 1:26 PM ]

As mentioned earlier, the database server timezone is UTC and the JDBC library runs on a client machine set to local time. If the database has a date value of 2012-7-23, it changes to 2012-7-22 on the client side (if the client is running in a timezone ahead of UTC) when coerced using to-date-time in clj-time. This is extremely dangerous and is not specific to any database. I noticed it first on Sybase Enterprise Server and I have now also replicated it in the test case with PostgreSQL.

I have attached a patch which fixes this problem by passing in an optional parameter. The test case is also modified to use clj-time, as it expresses the problem more clearly. Please let me know if you need any more clarification.

p.s. I have already mailed the CA and should reach Durham in a few days.

Comment by Sean Corfield [ 12/Sep/12 8:01 PM ]

Since the proposed patch requires the user to pass in a list of columns to treat specially - and reorders columns in results and adds some performance overhead for all users, not just those wanting to adjust column values - I believe users who choose to run JDBC clients in timezones different to the database, against the widely-listed best practice recommendations, should bear the burden of adjusting the columns themselves in their own client code. Since they already know which columns to adjust and they know the difference between UTC and local time, they should be able to make the adjustments easily enough as part of post-processing the resultset-seq.





[JDBC-33] update-or-insert does not work with oracle Created: 21/May/12  Updated: 10/Jun/12  Resolved: 10/Jun/12

Status: Resolved
Project: java.jdbc
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Defect Priority: Minor
Reporter: Brian Tatnall Assignee: Sean Corfield
Resolution: Completed Votes: 0
Labels: bug
Environment:

clojure 1.3.0
clojure.java.jdbc 0.2.1
oracle 10.2.0.4.0



 Description   

According to the oracle documentation

For a prepared statement batch, it is not possible to know the number of rows affected in the database by each individual statement in the batch. Therefore, all array elements have a value of -2. According to the JDBC 2.0 specification, a value of -2 indicates that the operation was successful but the number of rows affected is unknown.

As documented, update-values returns (-2), which means the zero? check in update-or-insert-values will never be true.

Oracle does provide the number of rows updated via the getUpdateCount method on Statement instances.

The following change fixes this problem for oracle users, but I am not sure how it will affect other drivers.

(defn oracle-do-prepared
  "Executes an (optionally parameterized) SQL prepared statement on the
  open database connection. Each param-group is a seq of values for all of
  the parameters.
  Return a seq of update counts (one count for each param-group)."
  [sql & param-groups]
  (with-open [^PreparedStatement stmt (prepare-statement (connection) sql)]
    (if (empty? param-groups)
      (transaction* (fn [] (vector (.executeUpdate stmt))))
      (do
        (doseq [param-group param-groups]
          (set-parameters stmt param-group)
          (.addBatch stmt))
        (transaction* (fn [] (let [result (.executeBatch stmt)
                                   count  (.getUpdateCount stmt)]
                               (vector count))))))))


 Comments   
Comment by Sean Corfield [ 22/May/12 12:17 AM ]

Interesting. (.getUpdateCount stmt) is a standard part of the JDBC API, so a portable approach looks reasonable: if the result of (.executeBatch stmt) is a single sequence containing a -2 value, the code could call (.getUpdateCount stmt) and return that (single) value as a sequence. That should be completely portable, but it will be hard to test, since none of the other DBs return -2 as far as I know and I don't have an Oracle install to test against.
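In plain JDBC terms, the portable check sketched above is: sum the per-statement counts from executeBatch, and if the driver reported Statement.SUCCESS_NO_INFO (the constant for -2), fall back to getUpdateCount. A hypothetical helper (not java.jdbc's actual code) that is testable without any driver:

```java
import java.sql.Statement;

public class BatchCounts {
    // Sum per-statement update counts from executeBatch; if the driver
    // reported SUCCESS_NO_INFO (-2), fall back to the statement-level
    // count obtained from getUpdateCount().
    static int totalUpdateCount(int[] batchCounts, int statementUpdateCount) {
        int total = 0;
        for (int c : batchCounts) {
            if (c == Statement.SUCCESS_NO_INFO) {
                return statementUpdateCount;
            }
            total += c;
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(totalUpdateCount(new int[]{1, 1}, 0)); // 2
        System.out.println(totalUpdateCount(new int[]{-2}, 1));   // 1 (Oracle-style)
    }
}
```

Drivers that report real counts are unaffected, while Oracle-style -2 results get the accurate statement-level count.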





Generated at Wed Sep 03 00:27:12 CDT 2014 using JIRA 4.4#649-r158309.