FieldInterceptorImpl.readObject() triggers proxy creation just to unwrap immediately
by Nikita Tovstoles
Per the Hibernate Core docs, accessing a lazy property on a bytecode-instrumented entity returns the unproxied target property. From what I can tell, however, when the target property is itself an entity, this is implemented (in Hibernate 3.3.1.GA) by:
- generating a proxy for the target entity
- setting its unwrap flag to true
- immediately unwrapping it
In (both javassist and cglib) FieldInterceptorImpl:
public Object readObject(Object target, String name, Object oldValue) {
    Object value = intercept( target, name, oldValue );
    if ( value instanceof HibernateProxy ) {
        LazyInitializer li = ( (HibernateProxy) value ).getHibernateLazyInitializer();
        if ( li.isUnwrap() ) {
            value = li.getImplementation();
        }
    }
    return value;
}
In our test, the implicit call to DefaultLoadEventListener.createProxyIfNecessary() contributes 13% of execution time, with 9% coming from AbstractEntityPersister.createProxy(). (Our tests execute read-only methods that hit the 2nd-level cache; for context, fetching the actual data from the 2nd-level cache takes only 6% of execution time.) In other words, proxy generation isn't cheap.
Given the stated intent to return an unproxied entity, why not ensure that AbstractEntityPersister.initializeLazyProperty returns unproxied values? Better yet, why not introduce another version of EntityType.resolveIdentifier that bypasses proxy creation (via immediateLoad, maybe)?
If there is some really good reason to create and immediately discard proxies in this case, createProxy() should at least be made cheaper: a third of that method is spent in ReflectHelper.overridesEquals(Class), called from the BasicLazyInitializer constructor. Is there any reason not to cache the results of that method (and, for that matter, of other ReflectHelper methods)?
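A sketch of what such a cache could look like (the class and cache names are mine, not existing Hibernate code; only ReflectHelper.overridesEquals(Class) itself exists):

```java
import java.lang.reflect.Method;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical caching wrapper -- the class name is made up for illustration.
final class CachedReflectHelper {

    private static final Map<Class<?>, Boolean> OVERRIDES_EQUALS =
            new ConcurrentHashMap<Class<?>, Boolean>();

    // True if clazz (or a superclass other than Object) overrides equals().
    static boolean overridesEquals(Class<?> clazz) {
        Boolean cached = OVERRIDES_EQUALS.get(clazz);
        if (cached == null) {
            boolean result;
            try {
                Method equals = clazz.getMethod("equals", Object.class);
                result = equals.getDeclaringClass() != Object.class;
            }
            catch (NoSuchMethodException e) {
                result = false; // unreachable: Object declares equals(Object)
            }
            cached = Boolean.valueOf(result);
            // A racy put is harmless here: the computed value is deterministic.
            OVERRIDES_EQUALS.put(clazz, cached);
        }
        return cached.booleanValue();
    }
}
```

Since the answer for a given Class never changes within a classloader, the cache can live for the lifetime of the SessionFactory.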
I'm more than happy to contribute the code given a little guidance.
Thanks
-nikita
15 years, 6 months
Problems using self-ref
by mailinglist@j-b-s.de
Hi all!
I tried to use a self-referencing entity, but unfortunately deletion fails with:
java.lang.IllegalStateException: java.lang.IllegalStateException: org.hibernate.ObjectDeletedException: deleted object would be re-saved by cascade (remove deleted object from associations): [PInstrument#1]
Using Google I only found samples using HBM files, but I want to use annotations. Is it necessary to declare both relations (many-to-one and one-to-many)? I tried that as well, but without success.
Please see the Hibernate model class below:
@Entity
@Table(name = "INSTRUMENT")
public class PInstrument implements Serializable
{
    @Id
    @Column(name = "INSTRUMENT_ID")
    @GeneratedValue(strategy = GenerationType.AUTO, generator = "instrument_seq_gen")
    @SequenceGenerator(name = "instrument_seq_gen", sequenceName = "INSTRUMENT__SEQ")
    private Long _id;

    @Column(name = "NAME", unique = true, nullable = false, length = 50)
    @NotNull
    @Length(min = 2)
    private String _name;

    @ManyToOne(fetch = FetchType.LAZY, cascade = CascadeType.ALL)
    @OnDelete(action = org.hibernate.annotations.OnDeleteAction.CASCADE)
    @Cascade({ org.hibernate.annotations.CascadeType.DELETE_ORPHAN, org.hibernate.annotations.CascadeType.ALL })
    @JoinColumn(name = "FK_INSTRUMENT_ID")
    @Index(name = "IDX_FK_MAIN_INSTRUMENT")
    private PInstrument _mainInstrument;

    // ... more attributes and set/get methods
}
I even added all possible cascade annotations, but the issue remains. Any hint is really appreciated.
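For reference, declaring both sides would look roughly like the sketch below (the collection field name is invented; only _mainInstrument is from my class). I also wonder whether the CascadeType.ALL on the child's @ManyToOne is what makes Hibernate try to re-save the deleted parent, since cascade would normally go from the parent's collection side:

```java
// Child -> parent: no cascade from the many side.
@ManyToOne(fetch = FetchType.LAZY)
@JoinColumn(name = "FK_INSTRUMENT_ID")
private PInstrument _mainInstrument;

// Parent -> children: cascade deletes from the one side instead.
@OneToMany(mappedBy = "_mainInstrument")
@Cascade({ org.hibernate.annotations.CascadeType.ALL,
           org.hibernate.annotations.CascadeType.DELETE_ORPHAN })
private Set<PInstrument> _subInstruments = new HashSet<PInstrument>();
```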
Thanks in advance
jens
15 years, 7 months
Version 3.2.4.SP1
by Alexandros Karypidis
Hi all,
DISCLAIMER: I've just started looking into working on Hibernate in the
last 15 minutes.
Anyway, I was going through https://www.hibernate.org/422.html and set
up a JDK5/Maven209/Eclipse342 environment to get started. I built
http://anonsvn.jboss.org/repos/hibernate/core/tags/hibernate-3.3.1.GA/
successfully using these instructions so it seems I'm fine with the
foreplay. :-)
Now, I want to work with 3.2.4.SP1 (the version bundled with JBoss 4.2.3). However, I can't find a suitable tag that has a pom.xml in it: neither v324sp1 nor Branch_3_2_4_SP1_CP02 seems to have one. Was Maven adopted starting with 3.3 (i.e. there's no point in looking), or have I missed something?
Cheers,
Alexandros
15 years, 7 months
RE: [hibernate-dev] Database Refresh Issue(Save and query)
by אלחנן מעין
Also, I would recommend Java Saloon as an additional ORM forum.
-----Original Message-----
From: Chris Bredesen <cbredesen(a)redhat.com>
Sent: Tue, 12 May 2009 15:47
To: sridhar veerappan <sriasarch(a)gmail.com>
Cc: hibernate-dev(a)lists.jboss.org
Subject: Re: [hibernate-dev] Database Refresh Issue(Save and query)
Please post this issue on the user forum. This list is for discussion
about the development of Hibernate itself.
-Chris
sridhar veerappan wrote:
> Hi,
> I am using hibernate 1.2, when I save the data it is getting save in the
> database(save) , but immediatly i am(query) checking for the updated
> data, I am not getting the refreshed data, getting the old data. After
> 3-10 seconds, it is getting the new/updated data .
>
> How to solve this issue.
>
> Code snippet:
> Save:
> sess = HibernateConfigurator.getSessionFactory().openSession();
> sess.saveOrUpdate(p_obj);
> sess.flush();
>
> Query:
>
> sess = HibernateConfigurator.getSessionFactory().openSession();
> List result = sess.find(p_query);
>
>
>
> Thanks in Advance
> Sridhar
>
_______________________________________________
hibernate-dev mailing list
hibernate-dev(a)lists.jboss.org
https://lists.jboss.org/mailman/listinfo/hibernate-dev
15 years, 7 months
Database Refresh Issue(Save and query)
by sridhar veerappan
Hi,
I am using Hibernate 1.2. When I save data, it is saved in the database, but when I immediately query for the updated data I get the old data instead. Only after 3-10 seconds do I get the new/updated data.
How do I solve this issue?
Code snippet:

Save:
    sess = HibernateConfigurator.getSessionFactory().openSession();
    sess.saveOrUpdate(p_obj);
    sess.flush();

Query:
    sess = HibernateConfigurator.getSessionFactory().openSession();
    List result = sess.find(p_query);
Thanks in Advance
Sridhar
15 years, 7 months
Search: Dynamic Document boosting
by Sanne Grinovero
Hello,
I currently need to be able to define a different boost per entity INSTANCE, not just per type.
I could obtain this functionality today by using a custom class bridge, but the entity is quite complex, and by building my own class bridge I would have to map all fields myself, losing the flexibility of the annotations for the current type and all @IndexedEmbedded fields.
I'd like to add a new parameter to @Boost; currently it has a mandatory float value. The new parameter would be:
Class<? extends BoostScorer> impl();
where BoostScorer is an interface with something like:
public float score(Object value);
The annotation would default to an implementation returning a constant 1.0f.
This would interact with the existing "value" parameter: IMHO they should multiply each other, so I'd change the existing value to also default to 1.0f, and people may then change one or both values.
Setting both values might be useful to reuse the same impl on different types/fields while still being able to statically scale the result of the score(Object) function, without having to write a new implementation.
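To make the proposal concrete, here is a minimal sketch of the interface and the multiplication (names follow the proposal; nothing here exists in Hibernate Search yet):

```java
// Proposed interface (sketch): computes a per-instance boost factor.
interface BoostScorer {
    float score(Object value);
}

// Default implementation: constant 1.0f, so plain @Boost keeps today's behavior.
class ConstantScorer implements BoostScorer {
    public float score(Object value) {
        return 1.0f;
    }
}

// The static @Boost value and the dynamic score multiply each other.
final class Boosts {
    static float effectiveBoost(float staticValue, BoostScorer scorer, Object entity) {
        return staticValue * scorer.score(entity);
    }
}
```

With both defaulting to 1.0f, users can override the static value, the scorer, or both independently.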
What do you think?
Sanne
15 years, 7 months
Similarity approach in Hibernate Search
by Sanne Grinovero
Each time a new Document is added to the index, the Similarity relevant to that entity is looked up from the pertaining DocumentBuilder and set on the IndexWriter. In AddWorkDelegate:

    Similarity similarity = documentBuilder.getSimilarity();
    writer.setSimilarity( similarity );
    writer.addDocument( work.getDocument(), analyzer );

So while the analyzer is scoped per document, the similarity is set globally on the IndexWriter.
Does it make sense to update the similarity on each add operation? It is a problem, as it means I can't use two (or more) threads to add documents to the same index.
Is there a good use case where someone would need a different Similarity implementation for different entities contained in the same index? If not, I'd like to change that to an "illegal configuration".
Sanne
15 years, 7 months
Hibernate Search and massive indexing
by Emmanuel Bernard
I have synced with Sanne on his work on massive reindexing and here is
the outcome of the discussion.
1. An exclusive batch mode is a mode in which a node has exclusive access to the index and can optimize writes (not flushing or committing at specific times, etc.).
2. The node able to activate the exclusive batch mode has to be the master in a cluster (i.e. not a slave).
3. The master will have two modes: a transactional mode (as today, i.e. commit at tx boundaries, potentially async) and an exclusive batch mode called the adaptive mode.
In this mode the BackendQueueProcessor can take some freedom in when and how it flushes changes to the Lucene index and when and how it commits.
One approach would be to be transactional (i.e. one queue of changes = one commit) for low thresholds and batch-exclusive for higher thresholds (applying several queues of changes before flushing or committing). This back end would somehow communicate with the master-copy process so that changes are only copied at the right time. (I think it should work well already, but that needs to be verified.)
A slave / client could force a commit by sending a Commit LuceneWork if needed.
4. There should be a way to switch at runtime from the tx mode to the adaptive mode. When switching, the tx queue is forked: new elements are queued while old elements are processed.
When the old queue is emptied, the adaptive mode kicks in.
5. On top of that, the massive indexer API reads data from the database as fast as possible and pushes index work to the adaptive engine. This API will be single-server but multi-threaded for now; Sanne can describe it in more detail.
This API would have start() and waitTillDone() methods that start and stop the adaptive engine.
That's it for the first step.
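As a rough sketch of that start()/waitTillDone() contract (all names are hypothetical, not the eventual Hibernate Search API):

```java
import java.util.concurrent.CountDownLatch;

// Hypothetical shape of the massive-indexer API described above.
interface MassIndexer {
    void start();        // kick off the indexing thread(s)
    void waitTillDone(); // block until all index work has been pushed
}

// Trivial single-threaded stand-in: "indexes" a fixed number of items.
class SimpleMassIndexer implements MassIndexer {
    private final int items;
    private final CountDownLatch done = new CountDownLatch(1);
    private volatile int indexed = 0;

    SimpleMassIndexer(int items) {
        this.items = items;
    }

    public void start() {
        new Thread(new Runnable() {
            public void run() {
                for (int i = 0; i < items; i++) {
                    indexed++; // stand-in for pushing one index work unit
                }
                done.countDown(); // signal completion to waitTillDone()
            }
        }).start();
    }

    public void waitTillDone() {
        try {
            done.await();
        }
        catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    int indexedCount() {
        return indexed;
    }
}
```

The real engine would push LuceneWork to the adaptive back end instead of incrementing a counter, but the lifecycle contract is the same.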
Second steps (in no particular order):
6. Make the massive indexer API work in a cluster. Slaves would read the DB and push index work to the queue.
7. Find a way to apply analysis before the actual IndexWriter usage. That would increase indexing parallelism by allowing some pipelining. Or, even better, analysis could run on the slaves, freeing CPU time on the master (which would work nicely with 6).
Sanne, please add anything I have missed or misinterpreted.
15 years, 7 months
Hibernate Core 3.5 and Bean Validation integrated
by Emmanuel Bernard
I've added support for:
- validation modes
- custom groups
- provided ValidatorFactory
- automatic VF creation if needed
- raise of the ConstraintViolationException
- custom TraversableResolver that ignores associations
- DDL statement generation
Please test it out; it's in trunk for everyone.
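For example, the validation mode should be configurable in persistence.xml (property name per the JPA 2.0 draft; treat this as a sketch, with a made-up unit name):

```xml
<persistence-unit name="myUnit">
  <properties>
    <!-- AUTO (default), CALLBACK, or NONE -->
    <property name="javax.persistence.validation.mode" value="CALLBACK"/>
  </properties>
</persistence-unit>
```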
15 years, 7 months