Marko Bekhta, if you're looking for issues to solve, I think this one might be interesting for you. There are two reasons why we need this cache:
- we need to support external @Valid on any container type
- we are not sure the candidate VE optimization we introduced is correct
See below for more details about both cases.
The first item is related to this issue: https://hibernate.atlassian.net/browse/HV-1481. Namely, if we have:
@Valid
private T someContainer;
I worked around the issue when T is a List/Map/Set to fix HV-1481, as it was a clear regression, but the problem is the same for any type of container. If we want to support it for any container type, we will need the cache, otherwise it will be slow as hell. If possible, this cache should be shared between the regular case and the cascading case, i.e. we should be able to get the VE from the cache given a container type and a type argument.

As for the "candidate VEs" mechanism we introduced, in the end we are wondering whether it can work properly at all. One tricky case is the following:
@Valid
private List<String> someContainer;
and then you have MyContainer, which extends List<String> but also implements a totally different container type. Following the BV spec, this should throw an error, as there are two different VEs that can be applied. But I think that currently we would just use the List VE and not throw an error; at the very least, we should check that. So basically, there are several things here:
- introduce a VE resolver cache and use it every time we have to find the VE for a given type/type argument
- benchmark whether it introduces a slowdown using the JMH test cases: in the simple bean validation test, we shouldn't see any, as it's just a startup cost; in the cascaded case, we should see something, hopefully an improvement, since we would only have to deal with the candidate VEs once
- see how the HV-1481 fix can be generalized to any container type while keeping the overhead to a minimum; to do so, we might also have to track non-container types in the cache (i.e. record that String, for instance, is not a container, otherwise we would have to check it every time)
- try to determine whether the candidate VE optimization we introduced can be kept or is a no-no; if it can't be kept, we should remove it. Fortunately, with the work above, removing it should have a very low impact, as we would have the cache.
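To make the MyContainer ambiguity concrete, here is a minimal sketch of the kind of check the resolver would need. All names here (CustomContainer, ValueExtractorCandidateCheck, containerCandidates) are hypothetical, not the actual Hibernate Validator internals: the idea is to keep only the maximally specific registered container types a given type matches, and if more than one unrelated candidate remains, the BV spec mandates throwing instead of silently picking one.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Hypothetical container type, unrelated to List.
interface CustomContainer<T> {
    T unwrap();
}

// The tricky case: a type matching two unrelated containers,
// List<String> (via ArrayList) and CustomContainer<String>.
class MyContainer extends ArrayList<String> implements CustomContainer<String> {
    @Override
    public String unwrap() {
        return isEmpty() ? null : get(0);
    }
}

class ValueExtractorCandidateCheck {

    // Collects the registered container types the given type is assignable to,
    // then keeps only the maximally specific ones. A result with more than one
    // element means the VE resolution is ambiguous and should throw.
    static List<Class<?>> containerCandidates(Class<?> type, Set<Class<?>> registeredContainers) {
        List<Class<?>> candidates = new ArrayList<>();
        for (Class<?> container : registeredContainers) {
            if (container.isAssignableFrom(type)) {
                candidates.add(container);
            }
        }
        // Drop any candidate that is a strict supertype of another candidate.
        List<Class<?>> maximallySpecific = new ArrayList<>();
        for (Class<?> candidate : candidates) {
            boolean hasMoreSpecific = false;
            for (Class<?> other : candidates) {
                if (other != candidate && candidate.isAssignableFrom(other)) {
                    hasMoreSpecific = true;
                    break;
                }
            }
            if (!hasMoreSpecific) {
                maximallySpecific.add(candidate);
            }
        }
        return maximallySpecific;
    }
}
```

With List and CustomContainer both registered, MyContainer yields two unrelated candidates, which is exactly the case that should raise an error rather than silently fall back to the List VE.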
Note that introducing even one more map lookup or object creation might cause a noticeable slowdown, so we will need to be very careful in the cascading case.

We probably need several separate issues to track all this, but I think it's better to have the general overview in one place. Please tell me if you're interested in this one and whether my explanations are clear enough; if so, I will help you get on track!
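For reference, a rough sketch of what such a shared resolver cache could look like, including the negative caching of non-container types mentioned above. Again, all names (ValueExtractorCache, resolve, NON_CONTAINER) are hypothetical, not the real HV internals, and a real key would also have to include the type argument; this only illustrates the mechanism:

```java
import java.util.Map;
import java.util.Optional;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of a shared VE resolver cache.
class ValueExtractorCache {

    // Sentinel stored for types that have no extractor, so a non-container
    // type like String is inspected only once (negative caching).
    private static final Object NON_CONTAINER = new Object();

    private final Map<Class<?>, Object> cache = new ConcurrentHashMap<>();
    private final Set<Class<?>> registeredContainers;

    ValueExtractorCache(Set<Class<?>> registeredContainers) {
        this.registeredContainers = registeredContainers;
    }

    // Returns the matched container type, or empty if the type is not a container.
    Optional<Class<?>> resolve(Class<?> type) {
        // computeIfAbsent: the (potentially slow) resolution runs at most once
        // per type; every later lookup is a single map get.
        Object result = cache.computeIfAbsent(type, t -> {
            for (Class<?> container : registeredContainers) {
                if (container.isAssignableFrom(t)) {
                    // Simplification: first match wins. A real implementation
                    // would apply the maximally-specific / ambiguity rules.
                    return container;
                }
            }
            return NON_CONTAINER;
        });
        return result == NON_CONTAINER ? Optional.empty() : Optional.of((Class<?>) result);
    }
}
```

Regarding the lookup-cost concern: with computeIfAbsent the hot path is one map get, and for cascaded metadata one could precompute the composite (container type, type argument) key once at metadata-build time so that no key object is allocated per validated element.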