Nope, that's not the job of the EE containers; that's what Hibernate does, and it does it perfectly well in standalone Java apps too. As I said, we manage our own EntityManagerFactory.
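For illustration, a minimal sketch of bootstrapping JPA outside a container - the persistence-unit name and connection properties here are made up, not Keycloak's actual config:

    // Minimal sketch of bootstrapping JPA without an EE container (assumed
    // persistence-unit name and JDBC settings, not Keycloak's actual config).
    import java.util.HashMap;
    import java.util.Map;
    import javax.persistence.EntityManager;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Persistence;

    public class StandaloneJpaExample {
        public static void main(String[] args) {
            Map<String, String> props = new HashMap<>();
            props.put("javax.persistence.jdbc.driver", "org.h2.Driver");
            props.put("javax.persistence.jdbc.url", "jdbc:h2:mem:test");
            props.put("javax.persistence.jdbc.user", "sa");
            props.put("javax.persistence.jdbc.password", "");

            // Hibernate processes the entity annotations itself here; no container involved.
            EntityManagerFactory emf = Persistence.createEntityManagerFactory("example-pu", props);
            EntityManager em = emf.createEntityManager();
            try {
                em.getTransaction().begin();
                // ... work with entities ...
                em.getTransaction().commit();
            } finally {
                em.close();
                emf.close();
            }
        }
    }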

Have you looked at KeycloakServer inside the testsuite? You can spin up a perfectly functional KC server with nothing but an embedded Undertow server.
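For reference, a minimal embedded Undertow sketch - not the actual KeycloakServer code, just to show how little is involved in getting an embedded server running:

    // Minimal embedded Undertow sketch (illustrative only; KeycloakServer in the
    // testsuite does considerably more, e.g. deploying the Keycloak JAX-RS app).
    import io.undertow.Undertow;
    import io.undertow.util.Headers;

    public class EmbeddedUndertowExample {
        public static void main(String[] args) {
            Undertow server = Undertow.builder()
                    .addHttpListener(8081, "localhost")
                    .setHandler(exchange -> {
                        exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain");
                        exchange.getResponseSender().send("Hello from embedded Undertow");
                    })
                    .build();
            server.start();
        }
    }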

On 21 October 2015 at 21:08, Stan Silvert <ssilvert@redhat.com> wrote:
On 10/21/2015 2:43 PM, Stian Thorgersen wrote:
I have no idea what you mean about containers. As I said, we manage our own EntityManagerFactory, etc. inside Keycloak. It doesn't rely on JEE for that part.
Somebody has to process the annotations in org.keycloak.models.jpa.entities, do injection, interception, etc.  That's the job of the EE containers.



We just need the classes, which we can get with jboss-modules.
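Roughly, a sketch of what loading those classes through jboss-modules could look like - the module name and entity class below are assumptions for illustration, not necessarily the real Keycloak module ids:

    // Sketch of loading classes through jboss-modules (module and class names
    // are illustrative assumptions).
    import org.jboss.modules.Module;
    import org.jboss.modules.ModuleIdentifier;
    import org.jboss.modules.ModuleLoader;

    public class ModuleClassLoadingExample {
        public static void main(String[] args) throws Exception {
            ModuleLoader loader = Module.getBootModuleLoader();
            Module jpaModule = loader.loadModule(ModuleIdentifier.create("org.keycloak.keycloak-model-jpa"));
            // Use the module's classloader to get at the entity classes.
            Class<?> realmEntity = jpaModule.getClassLoader()
                    .loadClass("org.keycloak.models.jpa.entities.RealmEntity");
            System.out.println("Loaded " + realmEntity.getName());
        }
    }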

On 21 October 2015 at 20:16, Stan Silvert <ssilvert@redhat.com> wrote:
On 10/21/2015 2:08 PM, Stan Silvert wrote:
On 10/21/2015 1:57 PM, Stian Thorgersen wrote:
We manage our own EntityManagerFactory and EntityManager as well as our own transactions. So that's not true.
If all you need is the datasource info that lives in standalone.xml then yes, we can get that.
But I'm a little confused as to how this would work.  Are you saying that you wouldn't use any of the classes in org.keycloak.models.jpa.entities?  Those need containers.


On 21 October 2015 at 19:53, Stan Silvert <ssilvert@redhat.com> wrote:
On 10/21/2015 1:23 PM, Stian Thorgersen wrote:
Guys - all we need is the datasource. I want to create a "db tool" for Keycloak; this is not for the Admin CLI.

We don't need CDI, EJB, etc. All we need is the datasource, or at least the connection information for the datasource, plus JBoss Modules so we can get the required classes.

If offline mode can do this then that'd be good, but I seem to remember datasources weren't available?
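For reference, a rough sketch of reading that datasource connection information straight out of standalone.xml - the file path and JNDI name are assumptions; the element names follow the standard datasources subsystem schema:

    // Sketch of pulling datasource connection info out of standalone.xml
    // (path and JNDI name are assumptions for illustration).
    import java.io.File;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.xpath.XPath;
    import javax.xml.xpath.XPathFactory;
    import org.w3c.dom.Document;

    public class DatasourceConfigExample {
        public static void main(String[] args) throws Exception {
            File config = new File("standalone/configuration/standalone.xml");
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().parse(config);
            XPath xpath = XPathFactory.newInstance().newXPath();
            // Match on local-name() so this works across schema namespace versions.
            String expr = "//*[local-name()='datasource'][@jndi-name='java:jboss/datasources/KeycloakDS']"
                    + "/*[local-name()='connection-url']/text()";
            String connectionUrl = xpath.evaluate(expr, doc);
            System.out.println("connection-url = " + connectionUrl);
        }
    }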
If you want to use our existing JPA infrastructure then you need a JPA container.  That's where this other stuff all gets pulled in.

Hey, let's just use JDBC!  :-)
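Half-joking aside, the plain JDBC route really is about this much - URL and credentials below are placeholders:

    // Plain JDBC, no container, no JPA (URL and credentials are placeholders).
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class PlainJdbcExample {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:h2:mem:keycloak", "sa", "");
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT 1")) {
                while (rs.next()) {
                    System.out.println(rs.getInt(1));
                }
            }
        }
    }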


On 21 October 2015 at 18:22, Marko Strukelj <mstrukel@redhat.com> wrote:
On Wed, Oct 21, 2015 at 5:57 PM, Stan Silvert <ssilvert@redhat.com> wrote:
On 10/21/2015 11:14 AM, Marko Strukelj wrote:
I haven't taken a very close look at Swarm yet, but I assumed you start Wildfly embedded in the same JVM as your Main class. If that is the case, then there should be no problem communicating with any kind of deployed component via the heap directly - just look up some singleton ...
Classloading constraints are what you usually run up against. You can't use your own version of a class that was loaded from a different classloader. I don't think Swarm helps you get around that; it just assumes you will access the WAR in the usual way, through an HTTP port. But I could be wrong, as I haven't worked with Swarm either.

Here is an explanation of the problem based on an old version of JBoss:
https://docs.jboss.org/jbossas/docs/Server_Configuration_Guide/4/html/JBoss_JMX_Implementation_Architecture-Class_Loading_and_Types_in_Java.html

With jboss-modules, it's easier to get around these problems, but you still run into the isolation built into the container itself, especially in the case of a WAR.
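For illustration, a small standalone demonstration of the underlying constraint - the jar path and class name are placeholders:

    // The "same" class loaded by two different classloaders is not the same type
    // (jar path and class name are placeholders).
    import java.net.URL;
    import java.net.URLClassLoader;

    public class ClassLoaderIsolationExample {
        public static void main(String[] args) throws Exception {
            URL jar = new URL("file:/tmp/example.jar");
            // Parent is null, so neither loader delegates to the application classloader.
            ClassLoader a = new URLClassLoader(new URL[] { jar }, null);
            ClassLoader b = new URLClassLoader(new URL[] { jar }, null);

            Class<?> fromA = a.loadClass("com.example.Foo");
            Class<?> fromB = b.loadClass("com.example.Foo");

            // Same bytes, different defining classloaders: not the same type, not
            // assignable, and any cast between instances throws ClassCastException.
            System.out.println(fromA == fromB);                  // false
            System.out.println(fromA.isAssignableFrom(fromB));   // false
        }
    }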
 
A CLI running in the same JVM as Wildfly would get bootstrapped through jboss-modules and would package its classes as a JBoss module. It could then deploy additional 'in-container' logic that needs actual access to datasources via many different mechanisms: a .jar containing an SLSB, a .war, a .sar, a POJO (via the pojo subsystem), or a custom subsystem that gets installed. In each of these cases it would have access to resource objects bound in the java:jboss JNDI namespace, and in each of these cases it would use shared types loaded via dependencies on jboss-modules.
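For illustration, a minimal sketch of what that in-container access could look like once the logic is deployed - the JNDI name here is an assumption:

    // Sketch of the in-container piece: look up the datasource via JNDI
    // (the JNDI name is an assumption for illustration).
    import java.sql.Connection;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class InContainerDatasourceLookup {
        public void runDbTask() throws Exception {
            DataSource ds = (DataSource) new InitialContext()
                    .lookup("java:jboss/datasources/KeycloakDS");
            try (Connection con = ds.getConnection()) {
                // ... run whatever db tool logic is needed against the connection ...
            }
        }
    }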



If that is not the case, then we would need some kind of interprocess communication. With a shell, the roles of who connects where could also be reversed: a started-up Wildfly instance could have a service connecting out to a local port bound by our CLI, rather than the other way around.
I don't think the direction of the connection matters so much as the fact that you need a serialized format to issue commands to a foreign container.

Or, as I mentioned, you need the CLI to actually live inside the container.
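For what it's worth, one existing example of such a serialized command format is the WildFly management protocol - detyped DMR operations sent to the management port. A rough sketch, assuming the default port and a read of the datasources subsystem:

    // DMR operations over the management port (9990 is the default); this just
    // reads the datasources subsystem as an example.
    import java.net.InetAddress;
    import org.jboss.as.controller.client.ModelControllerClient;
    import org.jboss.dmr.ModelNode;

    public class ManagementClientExample {
        public static void main(String[] args) throws Exception {
            try (ModelControllerClient client = ModelControllerClient.Factory.create(
                    InetAddress.getByName("localhost"), 9990)) {
                ModelNode op = new ModelNode();
                op.get("operation").set("read-resource");
                op.get("address").add("subsystem", "datasources");
                op.get("recursive").set(true);
                ModelNode result = client.execute(op);
                System.out.println(result.toJSONString(false));
            }
        }
    }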

The CLI needs to be able to execute its logic inside the container in order to harness the datasources, but the UI part that takes care of getting the inputs and displaying the outputs - e.g. CraSH - does not have to be inside the container.

I don't know what you mean by 'serialized format to issue commands to a foreign container', but if it means taking care of UI interaction, CraSH looks like a pretty decent CLI that is easy to extend with custom commands.

_______________________________________________
keycloak-dev mailing list
keycloak-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/keycloak-dev

