On 09/11/15 14:09, Stian Thorgersen
wrote:
I bet you meant 'Kubernetes' :-)
+1 for the improvements. Besides those, I think that sooner or later we
will need to solve long-running export+import, where you want to
import 100.000 users.
As I mentioned in another mail a few weeks ago, we can have:
1) A table with the progress (51.000 users already imported, around
49.000 remaining, etc.)
2) Concurrency and dividing the work among cluster nodes (node1 will
import 50.000 users and node2 another 50.000 users)
3) Failover (the import won't be completely broken if a cluster node
crashes after importing 90.000 users, but can continue on other
cluster nodes)
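Point (2) could be sketched as splitting the import into fixed-size pages and assigning pages round-robin to the cluster nodes. This is just an illustrative sketch, not Keycloak code; the class and method names are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: split a large user import into fixed-size pages
// and assign the pages round-robin across cluster nodes, so with 2 nodes
// each node imports roughly half and progress can be tracked per page.
public class ImportPartitioner {

    // Returns, for each node, the list of page indexes that node should import.
    static List<List<Integer>> assignPages(int totalUsers, int pageSize, int nodeCount) {
        int pageCount = (totalUsers + pageSize - 1) / pageSize; // ceiling division
        List<List<Integer>> perNode = new ArrayList<>();
        for (int n = 0; n < nodeCount; n++) {
            perNode.add(new ArrayList<>());
        }
        for (int page = 0; page < pageCount; page++) {
            perNode.get(page % nodeCount).add(page); // round-robin assignment
        }
        return perNode;
    }

    public static void main(String[] args) {
        // 100.000 users, pages of 1.000 users, 2 cluster nodes
        List<List<Integer>> assignment = assignPages(100_000, 1_000, 2);
        System.out.println("node1 pages: " + assignment.get(0).size()); // 50
        System.out.println("node2 pages: " + assignment.get(1).size()); // 50
    }
}
```

Tracking "pages done" rather than "users done" also gives point (1) almost for free, and a crashed node's unfinished pages can simply be reassigned, which is point (3).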
I think the stuff I did recently for pre-loading offline sessions at
startup could be reused for this too, and it can handle (2) and (3).
It can also handle a parallel import triggered from multiple cluster
nodes.
For example: currently, if you start Kubernetes with 2 cluster nodes,
both nodes will begin importing the same file at the same time,
because the import triggered by node1 is not yet finished before
node2 starts, so there is no DB record yet saying that the file is
already imported. With the stuff I did, only the coordinator (node1)
will start the import. Node2 will wait until the import triggered by
node1 is finished, but at the same time it can "help" import some
users (pages) if the coordinator asks it to. This impl is based on
the Infinispan distributed executor service:
http://infinispan.org/docs/5.3.x/user_guide/user_guide.html#_distributed_execution_framework
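The coordinator pattern described above can be illustrated with a single-JVM sketch, where worker "nodes" only import pages the coordinator hands them from a shared queue, so the same file is never imported twice and an unfinished page can be re-queued for another node. This uses plain java.util.concurrent threads to stand in for cluster nodes; the real implementation would hand out the work via Infinispan's distributed executor, and all names here are illustrative:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative single-JVM sketch of the coordinator pattern: the coordinator
// owns the queue of pages; workers poll pages from it instead of each node
// independently starting the whole import.
public class CoordinatedImport {

    // Imports pageCount pages using nodeCount workers; returns pages imported.
    static int runImport(int pageCount, int nodeCount) throws InterruptedException {
        BlockingQueue<Integer> pages = new LinkedBlockingQueue<>();
        for (int p = 0; p < pageCount; p++) {
            pages.add(p); // coordinator enqueues all pages once
        }

        AtomicInteger imported = new AtomicInteger();
        ExecutorService cluster = Executors.newFixedThreadPool(nodeCount);

        Runnable worker = () -> {
            Integer page;
            while ((page = pages.poll()) != null) {
                // A real worker would import the users of this page here;
                // on failure it could re-queue the page for another node.
                imported.incrementAndGet(); // progress-table update would go here
            }
        };
        for (int n = 0; n < nodeCount; n++) {
            cluster.submit(worker);
        }
        cluster.shutdown();
        cluster.awaitTermination(1, TimeUnit.MINUTES);
        return imported.get();
    }

    public static void main(String[] args) throws Exception {
        // e.g. 100.000 users in pages of 1.000 = 100 pages, 2 "nodes"
        System.out.println("pages imported: " + runImport(100, 2)); // 100
    }
}
```

The point of the sketch is only the ownership model: because a page is taken from the queue exactly once, adding a second node speeds the import up instead of duplicating it.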
Marek
_______________________________________________
keycloak-dev mailing list
keycloak-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/keycloak-dev