java.jdbc

A mechanism for reading specialized types from result sets could be useful

Details

  • Type: Enhancement
  • Status: Resolved
  • Priority: Major
  • Resolution: Completed
  • Affects Version/s: None
  • Fix Version/s: None
  • Component/s: None
  • Labels: None

Description

I had this idea while trying to adapt Korma for use with Postgres tables that have columns of array types. I had some success using Korma's transform functions to convert the java.sql.Array objects into a seq on the way out, but recent changes to Korma seem to indicate that the transform functions are not guaranteed to run before the connection is closed. This interferes with efforts to read the arrays, which I think can be their own result sets.

Presumably Korma could be changed to support reading the arrays on the way out, but being able to customize the read behavior within c.j.jdbc might be simpler.

The first idea I've had toward this end is to create a protocol with a single function that values being read out of the result set are passed through. It would default to the identity function, but users could extend it to, e.g., java.sql.Array. I don't know whether there are significant performance implications of an extra function call for each value coming out of the database.

I'll have a patch in this vein attached shortly.
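
A minimal sketch of that protocol idea in Clojure, with hypothetical names (IReadColumn and read-column are placeholders, not necessarily what the attached patches use):

    (defprotocol IReadColumn
      "Hypothetical protocol: convert a value as it is read out of a ResultSet."
      (read-column [val]
        "Return the Clojure value to use for this column value."))

    ;; Default behaviour is the identity function (plus a nil case for SQL NULL).
    (extend-protocol IReadColumn
      Object
      (read-column [val] val)
      nil
      (read-column [_] nil))

    ;; Example extension: realize java.sql.Array columns into vectors while
    ;; the connection is still open.
    (extend-protocol IReadColumn
      java.sql.Array
      (read-column [^java.sql.Array arr] (vec (.getArray arr))))

Realizing the array into a vector at read time happens before the connection is closed, which is exactly the window Korma's transform functions no longer guarantee.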

  1. JDBC-46-p1.patch
    23/Jan/13 10:26 AM
    2 kB
    Gary Fredericks
  2. JDBC-46-p2.patch
    23/Jan/13 3:53 PM
    3 kB
    Gary Fredericks
  3. JDBC-46-p3.patch
    11/Apr/13 9:32 PM
    3 kB
    Gary Fredericks

Activity

Gary Fredericks added a comment -

Attached patch. Is there any straightforward way to test this? Or a place it should be documented?

Gary Fredericks made changes -
Attachment: JDBC-46-p1.patch
Sean Corfield added a comment -

Docs: it just needs a good docstring, since the clojure.github.com site is auto-generated from those (although it is still triggered manually, it seems).

Test: mvn test

I have Derby, HSQLDB, MySQL, and MS SQL Server (two drivers). I don't have Postgres to test against. By default mvn test only tries Derby, HSQLDB and SQLite; TEST_DBS=... mvn test will test more.

Sean Corfield added a comment -

It would be nice to have a test with a custom conversion, perhaps mapping any String to some constant. Maybe a separate test ns that is required and tested from test_jdbc.clj?
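
A rough sketch of what such a test might look like, building on the hypothetical IReadColumn protocol sketched under the description. The example.read-column namespace, db-spec, the fruit table, and the assumption that the patched query path routes each column value through read-column are all illustrative, not taken from the actual patch:

    (ns example.read-column-test
      (:require [clojure.test :refer [deftest is]]
                [clojure.java.jdbc :as jdbc]
                ;; hypothetical ns holding the IReadColumn protocol sketch
                [example.read-column :refer [IReadColumn]]))

    ;; Placeholder connection spec for whatever test database is configured.
    (def db-spec {:subprotocol "hsqldb" :subname "mem:testdb"})

    (deftest string-columns-read-as-constant
      ;; Protocol extensions are global, so this stays in effect for any
      ;; tests that run after this one.
      (extend-protocol IReadColumn
        String
        (read-column [_] "CONSTANT"))
      (is (every? #(= "CONSTANT" (:name %))
                  (jdbc/query db-spec ["SELECT name FROM fruit"]))))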

Gary Fredericks added a comment -

Oh, we have to be careful with testing, because I believe any extensions to the protocol will stay in effect for all the remaining tests, right?

Gary Fredericks added a comment -

Attached new patch with docstrings and a test. I've only run the tests with MySQL. Let me know if there's anything else I can do.

Gary Fredericks made changes -
Attachment: JDBC-46-p2.patch
Gary Fredericks added a comment -

This patch recently came in handy in a different situation: when jdbc was returning Integers instead of Longs, we had a really clean spot to do the conversion.
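
With the same hypothetical protocol, that kind of conversion is a small extension, for example:

    ;; Widen Integer column values to Long as they come out of the ResultSet.
    (extend-protocol IReadColumn
      java.lang.Integer
      (read-column [i] (long i)))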

Sean Corfield added a comment -

There have been a lot of code changes since you submitted the patch and right now the patch doesn't apply. Can you re-submit the patch against the latest master branch so I can try again? Thanx!

Gary Fredericks added a comment -

I'm getting 3 test failures on master currently (with a basic mvn test), and after applying this new patch (p3) I still get 3 failures. So hopefully that means I didn't break anything.

Gary Fredericks made changes -
Attachment: JDBC-46-p3.patch
Sean Corfield added a comment -

Applied. Will be in 0.3.0-alpha2

Sean Corfield made changes -
Resolution: Completed
Status: Open → Resolved
Gary Fredericks added a comment -

I'm thinking this feature is no longer necessary, given the :row-fn option, since the only motivation in the first place was to have the opportunity to transform a result before the connection closes.

There would still be no easy way to do it from Korma, but that's Korma's fault. So maybe we should remove this before 0.3.0 is released?

Sean Corfield added a comment -

This turns out to be a very useful hook for certain database types. I used it in JDBC-57 to solve an issue with Oracle, enabling users to automatically convert Oracle-specific datatypes to regular values. So, thank you... even if you think it isn't necessary now!

Gary Fredericks added a comment -

That use case could be served by :row-fn, right? You just like that it's easier? I don't mind keeping it if others still find it valuable; I just wanted to make sure we didn't accidentally cement redundant features.

Sean Corfield added a comment -

:row-fn is intended for whole-row transformations - it is lazily mapped over the result set - so it's really meant for post-processing your data in a general sense while the connection is open. The protocol-based column reader is intended to transform specific types of data, independent of any application semantics (e.g., Oracle returns an oracle.sql.TIMESTAMP which can be automatically transformed into a java.sql.Timestamp or Date or whatever).

And :result-set-fn is intended for whole-result-set transformation, just prior to returning the result set.
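
For comparison, a sketch of the two option-based hooks described above, assuming the keyword-style options of the 0.3.0 alphas; db-spec and the fruit table are placeholders:

    (require '[clojure.java.jdbc :as jdbc])

    ;; :row-fn is lazily mapped over each row as it is realized;
    ;; :result-set-fn consumes the whole result set before the connection closes.
    (jdbc/query db-spec
                ["SELECT cost FROM fruit"]
                :row-fn :cost                        ; per-row transformation
                :result-set-fn (partial reduce +))   ; whole-result-set transformation

The protocol-based column reader sits beneath both hooks, converting individual column values before any :row-fn or :result-set-fn the caller supplies ever sees them.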

Gary Fredericks added a comment -

Sounds good to me.

