
Spring - When to use JDBC versus O/R Mapping

What factors determine which data access technology to choose? The choices considered here are straight JDBC, Spring's JDBC framework, iBATIS SQL Maps, and an O/R mapper such as Hibernate or JDO. In our opinion, straight JDBC is the solution of choice only when you are not allowed to use any framework beyond what is delivered in J2SE or J2EE.

If your project has only a few persistent classes, or you have to map to an existing database with several stored procedures, then a Spring JDBC solution makes sense. There is very little to configure, and with only a few tables to map to Java classes, MappingSqlQuery keeps the mapping straightforward. The StoredProcedure class makes working with stored procedures easy.
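As a sketch of how little code this takes, a MappingSqlQuery subclass pairs one SQL statement with one row-mapping method. The `Product` class, table, and column names below are invented for illustration and assume a configured `DataSource`:

```java
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Types;
import javax.sql.DataSource;

import org.springframework.jdbc.core.SqlParameter;
import org.springframework.jdbc.object.MappingSqlQuery;

// Hypothetical query mapping one table row to one Java object.
public class ProductByIdQuery extends MappingSqlQuery<Product> {

    public ProductByIdQuery(DataSource ds) {
        super(ds, "SELECT id, name, price FROM product WHERE id = ?");
        declareParameter(new SqlParameter("id", Types.INTEGER));
        compile();
    }

    @Override
    protected Product mapRow(ResultSet rs, int rowNum) throws SQLException {
        Product p = new Product();
        p.setId(rs.getInt("id"));
        p.setName(rs.getString("name"));
        p.setPrice(rs.getBigDecimal("price"));
        return p;
    }
}

// Usage sketch:
// Product p = new ProductByIdQuery(dataSource).findObject(42);
```

The query is compiled once and reused; all the JDBC plumbing (connections, statements, exception translation) is handled by the framework.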


If you have many classes that map to an existing database, or you don't have control over the database design, then you have to look at the mapping options between the tables and your Java classes.


When to Choose O/R Mapping
O/R mapping can have many benefits, but it is important to remember that not every application fits the O/R mapping paradigm.

Central issues are heavy use of set access and aggregate functions, and batch updates of many rows. If an application is mainly concerned with either of those — for example, a reporting application — and does not allow for a significant amount of caching in an object mapper, set-based relational access via Spring JDBC or iBATIS SQL Maps is probably the best choice.
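For this kind of set-oriented work, Spring's JdbcTemplate lets the database do the heavy lifting in a single statement, with no objects materialized per row. The table and column names below are made up for illustration and assume a configured `DataSource`:

```java
import java.math.BigDecimal;
import java.sql.Date;
import javax.sql.DataSource;

import org.springframework.jdbc.core.JdbcTemplate;

// Hypothetical reporting-style access: push set work into SQL.
public class OrderReports {

    private final JdbcTemplate jdbc;

    public OrderReports(DataSource dataSource) {
        this.jdbc = new JdbcTemplate(dataSource);
    }

    // Aggregate function evaluated in the database, not in Java.
    public BigDecimal totalForRegion(String region) {
        return jdbc.queryForObject(
                "SELECT SUM(amount) FROM orders WHERE region = ?",
                BigDecimal.class, region);
    }

    // One statement updates many rows; returns the affected row count.
    public int archiveBefore(Date cutoff) {
        return jdbc.update(
                "UPDATE orders SET status = 'ARCHIVED' WHERE placed < ?",
                cutoff);
    }
}
```

An O/R mapper would typically have to load each row as an object to perform the same update, which is exactly the overhead this passage warns about.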

Because all O/R mapping frameworks have a learning curve and setup cost, applications with very simple data access requirements are also often best served by sticking with JDBC-based solutions. Of course, if a team is already proficient with a particular O/R mapping framework, this concern may be less important.

Indicators that O/R mapping is appropriate are:

A typical load/edit/store workflow for domain objects: for example, load a product record, edit it, and synchronize the updated state with the database.

Objects may be queried in large sets but are updated and deleted individually.

A significant number of objects lend themselves to being cached aggressively (a "read-mostly" scenario, common in web applications).

There is a sufficiently natural mapping between domain objects and database tables and fields. This is, of course, not always easy to judge up front. Database views and triggers can sometimes be used to bridge the gap between the OO model and relational schema.

There are no unusual requirements in terms of custom SQL optimizations. Good O/R mapping solutions can issue efficient SQL in many cases, as with Hibernate's "dialect" support, but some SQL optimizations can be done only via a wholly relational paradigm.
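The load/edit/store workflow in the first indicator above is exactly what an O/R mapper automates. As a minimal Hibernate sketch (the `Product` entity and its properties are invented for illustration, assuming a configured `SessionFactory`):

```java
import java.math.BigDecimal;

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

// Hypothetical load/edit/store workflow with Hibernate.
public class ProductService {

    private final SessionFactory sessionFactory;

    public ProductService(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    public void repriceProduct(long id, BigDecimal newPrice) {
        Session session = sessionFactory.openSession();
        Transaction tx = session.beginTransaction();
        try {
            // Load: the mapper issues the SELECT and builds the object.
            Product product = session.get(Product.class, id);
            // Edit: plain Java mutation; no SQL written by hand.
            product.setPrice(newPrice);
            // Store: dirty checking flushes an UPDATE on commit.
            tx.commit();
        } catch (RuntimeException e) {
            tx.rollback();
            throw e;
        } finally {
            session.close();
        }
    }
}
```

Note that no SQL appears anywhere: the SELECT and the UPDATE are both generated by the mapper, which is the payoff when the object/table mapping is natural.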
