We have an issue similar to the one in the "deletion execution schedule
EVENT_ENTITY" thread from Sep 2015.
Because of extensive testing, our event_entity table (Oracle 12) is large: roughly 10 million rows.
The scheduled task org.keycloak.services.scheduled.ClearExpiredEvents runs quite often
in Keycloak and issues the following SQL:
delete from EVENT_ENTITY where REALM_ID=? and EVENT_TIME<?
The post "deletion execution schedule EVENT_ENTITY" from Sep 2015 states that
this is run every 15 minutes but according to our observations it is run much more often:
In our three-node Keycloak cluster the Oracle trace shows that this statement is executed
more than once per minute and causes Oracle to perform a full table scan each time, which
takes about one minute.
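For reference, the execution frequency and cumulative cost can also be checked in V$SQL;
the following is only a rough sketch (the LIKE pattern may need to be adapted to the
exact statement text):
select sql_id, executions, round(elapsed_time / 1e6) as elapsed_seconds
from v$sql
where sql_text like 'delete from EVENT_ENTITY where REALM_ID=% and EVENT_TIME<%';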
This is completely counter-productive: at virtually every point in time at least one full
table scan is running, just for the sake of keeping the event_entity table "clean".
The resource consumption on the Oracle server (CPU, disk I/O, ...) is very high and, in
our humble opinion, completely out of proportion to the purpose.
Another adverse consequence is that the table is permanently locked by these deletes,
which makes any DDL on the table difficult.
Creating the following index
create index keycloak.some_index_name on keycloak.event_entity(event_time,realm_id)
tablespace ...
works for us: it eliminates the full table scan. Of course there is a slight performance
penalty, since maintaining the index (on both inserts and deletes) requires a certain
amount of resources.
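The effect can be checked with a plain EXPLAIN PLAN; a minimal sketch, using literal
placeholder values instead of the binds:
explain plan for
  delete from keycloak.event_entity
  where realm_id = 'some-realm' and event_time < 1500000000000;
select * from table(dbms_xplan.display);
-- without the index: TABLE ACCESS FULL on EVENT_ENTITY
-- with the index:    INDEX RANGE SCAN on SOME_INDEX_NAME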
Our question is: is this the optimal solution to the problem?
And: is it a "legal" solution? After all, we have modified the Keycloak database schema,
which was created by Keycloak and which is presumably supposed to be under its exclusive
control.
What if, during a future upgrade, Keycloak tries to create an equivalent index (which
would fail with an error)?
As currently created by Keycloak, the event_entity table has only a primary key and no
other indexes.
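If a future schema migration does clash with our index, a simple workaround would
presumably be to drop it before the upgrade and recreate it afterwards if it is still
needed, e.g.:
drop index keycloak.some_index_name;
-- run the Keycloak upgrade / schema migration, then, if still needed:
create index keycloak.some_index_name on keycloak.event_entity(event_time,realm_id)
tablespace ...;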
Thanks in advance for your thoughts
Peter