I just received a notification about the following comment, as it seems
I voted for this issue myself years ago :)
It's sad that we don't have time to apply patches provided by someone
else with the best intentions to help; shouldn't we at least reply?
Of course that includes me too, it's just that I don't feel that
comfortable evaluating patches for the ORM module. Since this has been
pending for so long, I'm assuming someone already looked at it and didn't like it.
I noticed that Hibernate excludes unlisted classes even if
*<exclude-unlisted-classes>* is set to *false*.
Here is the text from persistence-2_0-final-spec.pdf:
/8.2.1.6 mapping-file, jar-file, class, exclude-unlisted-classes
The following classes must be implicitly or explicitly denoted as
managed persistence classes to be
included within a persistence unit: entity classes; embeddable classes;
The set of managed persistence classes that are managed by a persistence
unit is defined by using one or
more of the following:
. Annotated managed persistence classes contained in the root of the
persistence unit (*unless the
exclude-unlisted-classes element is specified*) /
/8.2.1.6.1 Annotated Classes in the Root of the Persistence Unit
All classes contained in the root of the persistence unit are searched
for annotated managed persistence
classes---classes with the
MappedSuperclass annotation---and any
mapping metadata annotations found on these classes will be processed,
or they will be mapped using
the mapping annotation defaults. If it is not intended that the
annotated persistence classes contained in
the root of the persistence unit be included in the persistence unit, the
*exclude-unlisted-classes* element must be specified as *true*. The
exclude-unlisted-classes element is not intended for use in Java SE
Does this mean Hibernate doesn't follow the specification here?
Here is a link to our jira issue https://issues.jboss.org/browse/JBIDE-11773
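For reference, here is a minimal persistence.xml fragment exercising the element in question (the unit and class names are made up):

```xml
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
  <persistence-unit name="example-unit">
    <!-- explicitly listed class -->
    <class>com.example.ListedEntity</class>
    <!-- false: annotated classes found in the root of the persistence
         unit should ALSO be included as managed classes -->
    <exclude-unlisted-classes>false</exclude-unlisted-classes>
  </persistence-unit>
</persistence>
```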
I was working on a pull request where the contributor had done multiple
merges of master into his topic branch during development, so we had
many merge commits nestled in his pull request.
The awesome thing I learned about Git today (courtesy of the folks on
#github) is that rebasing will weed out those merge commits! So
amazingly simple. Basically, following my normal pull request
process, just after fetching their work and switching to that branch,
I did `git rebase master` and that got rid of all the merge commits :)
If you happen to find yourself in the same boat at some point...
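To see why this works, here's a small self-contained demonstration in a throwaway repo in a temp directory (requires git >= 2.28 for `init -b`):

```shell
#!/bin/sh
# Demonstrates that rebasing a topic branch onto master drops the
# merge commits the branch accumulated (history becomes linear).
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q -b master repo && cd repo        # -b needs git >= 2.28
git config user.email demo@example.com
git config user.name demo
echo a > a.txt && git add a.txt && git commit -qm "initial"
git checkout -qb topic
echo b > b.txt && git add b.txt && git commit -qm "topic work"
git checkout -q master
echo c > c.txt && git add c.txt && git commit -qm "master work"
git checkout -q topic
git merge -q -m "merge master into topic" master   # the merge commit we want gone
echo d > d.txt && git add d.txt && git commit -qm "more topic work"
git rebase -q master                               # replays only the non-merge commits
merges=$(git rev-list --count --merges HEAD)
echo "merge commits after rebase: $merges"         # prints 0
```

The rebase replays only the branch's own (non-merge) commits on top of master, so the merge commits simply vanish from the history.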
Dear mailing list users,
I am new to Hibernate and I have written a simple program to persist data in a MySQL database.
I have done everything required to run the program: I have prepared a POJO and a hibernate.cfg.xml.
But when I run the program I get the following error in Eclipse.
Please help me resolve the error.
I have also included the jar (slf4j-api-1.6.1.jar) in the classpath.
Thanks in advance.
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Exception in thread "main" org.hibernate.MappingException: Unable to load class [ org.praveen.dto.UserDetais] declared in Hibernate configuration <mapping/> entry
Caused by: java.lang.ClassNotFoundException: org.praveen.dto.UserDetais
at java.security.AccessController.doPrivileged(Native Method)
at java.lang.Class.forName0(Native Method)
... 6 more
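For what it's worth, the stack trace points at the <mapping/> entry in hibernate.cfg.xml: Hibernate cannot find org.praveen.dto.UserDetais on the classpath, which usually means the configured name has a typo (UserDetails, perhaps?) or the compiled class is missing from the classpath. The relevant fragment, with the name taken from the error above, would look like:

```xml
<hibernate-configuration>
  <session-factory>
    <!-- the fully qualified name here must exactly match the compiled class -->
    <mapping class="org.praveen.dto.UserDetais"/>
  </session-factory>
</hibernate-configuration>
```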
I'm starting the translation of the documentation into French, and I've seen
that the "copyright year" in the ogm.ent file is set to 2011.
Should I change it? If yes, should I include it in OGM-157 (specific to the
translation) or should I open a new JIRA for that?
First of all, thank you for coming up with the idea and implementing
OGM. For quite some time I was thinking of using JPA annotations /
semantics to drive different NoSQL stores, but the whole idea of
re-implementing the JPA machinery was really scary. Now we don't need
to do this anymore, as we've got OGM :-)
For the past few days I have been looking at the
org.hibernate.ogm.dialect.GridDialect interface (as well as at the
existing Map, Infinispan and Ehcache implementations), and it looks
like it is very easy to implement non-transactional behavior (I mean,
persistence of tuples and associations is really straightforward).
What I was struggling with, though, is making a NoSQL store aware of
JTA transaction demarcation. What I would like to achieve is to
start a transaction in the underlying store (on transaction begin) and
commit / roll it back inside the data store when the JTA transaction is
committed / rolled back.
Looking at the existing implementations it wasn't easy to figure out
how to achieve this. Moreover, I've bumped into this discussion,
where Emmanuel provided his insight: "From this discussion it also
seems that we might need to have datastores and
dialects implement the Hibernate transaction object so that the datastore can
properly demarcate when isolation starts and when it ends. But that's clearly
not abstracted yet in Hibernate OGM."
In the end my question is rather simple (although the answer might
not be...): what would be the OGM way to start / commit a transaction in the
underlying data store in response to JTA transaction events? Or am I
asking a totally wrong question and should I be taking a different
approach? I would be grateful for any insight.
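The standard JTA hook for exactly this is a Synchronization registered with the transaction: begin the store transaction when the JTA transaction begins, flush in beforeCompletion, and commit or roll back in afterCompletion. Below is a minimal self-contained sketch of the idea; note that the Synchronization shape and status codes are simplified stand-ins for the javax.transaction interfaces, and DataStore is a hypothetical store handle, not an OGM API.

```java
// Simplified stand-in for javax.transaction.Synchronization; in real code
// you would register it via Transaction.registerSynchronization(...).
public class StoreTxSketch {

    public static final int STATUS_COMMITTED = 3;   // mirrors javax.transaction.Status
    public static final int STATUS_ROLLEDBACK = 4;

    public interface Synchronization {
        void beforeCompletion();
        void afterCompletion(int status);
    }

    /** Hypothetical handle to a NoSQL store's native transaction API. */
    public static class DataStore {
        public final StringBuilder ops = new StringBuilder();
        public void begin()    { ops.append("begin;"); }
        public void flush()    { ops.append("flush;"); }
        public void commit()   { ops.append("commit;"); }
        public void rollback() { ops.append("rollback;"); }
    }

    /** Ties the store's native transaction to the JTA transaction lifecycle. */
    public static class StoreSync implements Synchronization {
        private final DataStore store;
        public StoreSync(DataStore store) {
            this.store = store;
            store.begin();                       // start the store tx with the JTA tx
        }
        public void beforeCompletion() {
            store.flush();                       // last chance to write pending tuples
        }
        public void afterCompletion(int status) {
            if (status == STATUS_COMMITTED) store.commit();
            else store.rollback();
        }
    }

    public static void main(String[] args) {
        DataStore store = new DataStore();
        StoreSync sync = new StoreSync(store);   // real JTA: tx.registerSynchronization(sync)
        sync.beforeCompletion();                 // driven by the transaction manager at commit
        sync.afterCompletion(STATUS_COMMITTED);
        System.out.println(store.ops);           // begin;flush;commit;
    }
}
```

Whether OGM exposes such a hook today is exactly the open question; the sketch only shows the JTA-side mechanics.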
After many weeks of work, we have finally integrated the work done by Guillaume, Alan and Oliver back into master.
To run the mongodb module, make sure to activate the mongodb profile:
mvn clean install -Pmongodb
If, like me, you have installed MongoDB on a different host / port, you can use the following environment variables
mvn clean install -Pmongodb
Many thanks, guys. Not everything is done and polished (I have left OGM-132 and OGM-147 open for that), but this will simplify future work a lot.
In http://docs.jboss.org/hibernate/core/4.0/hem/en-US/html_single/, I
see the following description of shared-cache-mode. I think the default
is described incorrectly. I'll create a jira for this but wanted to see
if others agree with me.
By default, entities are elected for second-level cache if annotated
with @Cacheable. You can however:
ALL: force caching for all entities
NONE: disable caching for all entities (useful to take second-level
cache out of the equation)
ENABLE_SELECTIVE (default): enable caching when explicitly marked
DISABLE_SELECTIVE: enable caching unless explicitly marked as
@Cacheable(false) (not recommended)
See Hibernate Annotation's documentation for more details.
Shared-cache-mode looks correct to me in the Hibernate developer guide
(see section 6.2.1. Configuring your cache providers in
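For context, the setting under discussion lives in persistence.xml; a minimal fragment (unit name made up):

```xml
<persistence-unit name="example-unit">
  <!-- ENABLE_SELECTIVE: only entities explicitly marked @Cacheable
       participate in the second-level cache -->
  <shared-cache-mode>ENABLE_SELECTIVE</shared-cache-mode>
</persistence-unit>
```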
Strong suggested I forward this to the mailing list. With reference to the mail below, could any of you clarify/explain whether this is expected behaviour of NaturalIdLoadAccess with the second-level cache, as I see some discrepancies?
Thanks and Regards,
JBoss EAP QE Team
Red Hat Brno
----- Forwarded Message -----
From: "Madhumita Sadhukhan" <msadhukh(a)redhat.com>
To: "Strong Liu" <stliu(a)redhat.com>
Sent: Friday, April 27, 2012 12:19:18 AM
Subject: NaturalIdLoadAccess behaviour is this expected?
I noticed some strange behaviour while loading by natural id with the second-level cache enabled.
I am not sure if this is expected behaviour, but I notice some discrepancies which I would like to clarify.
I am not yet uploading this test to my AS7 branch on github, as AS7 is still stuck on Hibernate 4.1.2 and this test requires 4.1.3 from EAP 6 (currently tested) to work.
Please paste the attached unzipped folder into the AS testsuite folder structure in location:
Please run with the Hibernate 4.1.3 jars replaced in modules/org/hibernate..... within the AS7 build.
In my test I have tried to load an entity with NaturalIdLoadAccess, with the second-level cache enabled, using two natural ids (firstname and voterid), in several steps:
1) create a person (with natural ids firstname and voterid)
2) load using the natural ids for the first time
3) modify/update the natural ids in the database (in order not to touch the second-level cache)
4) load via the loader from the second-level cache using the old natural id values from step 2 // this works correctly as expected: it is able to load the values from the cache even though they have been modified in the DB
5) then load using the loader from (2), but with the updated values of firstname and voterid (natural ids) from step 3
// this is where it breaks: the Person object returned from this load still shows the older value of firstname, as in step 2, but I wonder how the loader works, since I passed the new values in using(....);
also the next step fails, so I wonder where the flaw is!
6) try to use the same loader and load using the older values of the natural ids, to recheck whether the older value still persists in the cache, as it was returned in step 5
// this returns a null pointer, showing it does not exist in the cache and the loader is unable to load it!
7) if I try again with the new actual values, the loader is able to load the Person entity again, but with the old values!!!
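In code, the loads in steps 2, 4 and 5 presumably boil down to something like this sketch (entity and natural id names taken from the description; the values are hypothetical):

```java
// Hibernate 4.1 NaturalIdLoadAccess; values are made up for illustration.
Person p = session.byNaturalId(Person.class)
        .using("firstname", "oldFirstname")  // old values -> served from the 2L cache
        .using("voterid", "oldVoterid")
        .load();
```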
So the problem/confusion is the discrepancy between step 5 and step 6: the older value of the entity loaded in step 5 indicates it is still cached, but the failure to load it in step 6 (throwing a null pointer) indicates it is not.
I am also surprised at how the using() method behaves on the loader, i.e. it returns the entity with the older values while loading "using()" the updated values (as confirmed in step 7).
Please note that the testcase will pass, as I replaced the asserts with S.O.P.s; hence you should check the values in the server log excerpt (attached for ease).
Could you take a quick look please and explain if this is expected behaviour?
Thanks and Regards,
JBoss EAP QE Team
Red Hat Brno