On 10/02/2012 03:00 PM, Sanne Grinovero wrote:
>>> # DefaultEntityAliases.intern
>>> Why are we interning this?
>>> org.hibernate.loader.DefaultEntityAliases.intern(String[])
>>> If they are constants, I guess they could be reused rather than
>>> re-computed all the time... in fact, see the next point:
>>>
>>> # Aliases and other string generations
>>> It seems Hibernate ORM is generating property aliases and other little
>>> Strings all the time; this test isn't using any HQL, just direct
>>> loading of entities via primary keys or by following relations from
>>> other managed entities.
>>
>> Isn't there a point where interning becomes wasteful? Surely there must be.
>> What is that tipping point?
> I'm not suggesting we use interning; what surprised me is that even
> for standard load operations of an entity, Hibernate doesn't have a
> "prepared" set of alias names. Couldn't the metamodel for each entity
> contain pre-computed alias names (or full SQL strings) to do the basic
> CRUD operations on that entity?
> Not being familiar with this code, I just found it odd that to load
> the same entity type some millions of times it would recompute these
> alias strings over and over.
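For concreteness, I read the suggestion as something along these lines
(illustrative names only, not actual ORM types): build the alias strings
once, when the entity's metamodel is assembled, and hand out the same
array on every load.

// Illustrative sketch only -- not real Hibernate ORM types.
public final class PrecomputedEntityAliases {

    private final String[] columnAliases;

    // Compute the alias strings once, at metamodel/persister construction
    // time, instead of regenerating them for every load.
    public PrecomputedEntityAliases(String[] columnNames, String suffix) {
        this.columnAliases = new String[columnNames.length];
        for (int i = 0; i < columnNames.length; i++) {
            this.columnAliases[i] = columnNames[i] + suffix;
        }
    }

    public String[] getColumnAliases() {
        return columnAliases;
    }
}
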
Hibernate caches the complete SQL strings for load-by-id, insert, etc.
Interning those would be pointless (it's very unlikely you will see
those same strings elsewhere in the VM).
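Roughly speaking, and simplified well past what the real persister code
looks like:

// Simplified illustration -- not the actual persister implementation.
public final class SimpleEntityLoader {

    // Generated once and held for the lifetime of the loader; every
    // subsequent load of this entity type reuses this single instance.
    private final String sqlLoadById;

    public SimpleEntityLoader(String tableName, String idColumn) {
        this.sqlLoadById = "select * from " + tableName
                + " where " + idColumn + " = ?";
        // sqlLoadById.intern() here would only copy the string into the
        // pool; nothing else in the VM builds this exact string, so there
        // is no duplicate for interning to collapse.
    }

    public String getSqlLoadById() {
        return sqlLoadById;
    }
}
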
What happens in DefaultEntityAliases is a little bit different: the
values being interned there are usually user-supplied column names
coming from result set mappings. TBH, I don't think I added the
intern() calls, so I am not the best one to explain why they were
added; I'm just pointing out the difference.
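What those calls boil down to is roughly the following (a simplified
sketch, not the actual source):

// Simplified sketch: intern each element of a user-supplied alias array.
final class AliasInterning {

    static String[] intern(String[] strings) {
        if (strings == null) {
            return null;
        }
        String[] result = new String[strings.length];
        for (int i = 0; i < strings.length; i++) {
            // The same column name tends to appear in many result set
            // mappings; interning collapses those duplicates onto a single
            // instance, at the cost of a string-pool lookup per element.
            result[i] = strings[i].intern();
        }
        return result;
    }
}
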
I think that longer term, as we move to an AST-based approach for SQL
generation, we will have much more opportunity for interning and
caching if we wish.
--
steve@hibernate.org
http://hibernate.org