[JBoss JIRA] (JGRP-2239) AUTH + ASYM_ENCRYPT causes problem with re-joining cluster (MERGE)
by Boris Sh (JIRA)
[ https://issues.jboss.org/browse/JGRP-2239?page=com.atlassian.jira.plugin.... ]
Boris Sh updated JGRP-2239:
---------------------------
Steps to Reproduce:
It was not easy for me to reproduce this problem on the Linux production server, so here is what I did on my Windows dev machine:
1. Deployed 3 nodes (no need for all 7 for the test, 3 is enough; and I couldn't reproduce the issue with only 2 nodes in the cluster).
2. Ran a "CPU Stress" utility with maximum thread priority and activity on all CPU cores of my PC for 3-5 minutes.
3. Stopped "CPU Stress".
4. Observed the problem in the logs: one node left the cluster, and after a "merge" request the cluster looked a bit strange: it now has 2 subgroups. The kicked node can't communicate with any other node, and its log fills up with error messages like "unrecognized cipher".
> AUTH + ASYM_ENCRYPT causes problem with re-joining cluster (MERGE)
> ------------------------------------------------------------------
>
> Key: JGRP-2239
> URL: https://issues.jboss.org/browse/JGRP-2239
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 4.0.6
> Environment: Infinispan 9.1.1 + JGroups 4.0.6.Final + Vert.x 3.5.0
> Reporter: Boris Sh
> Assignee: Bela Ban
>
> Hello,
> I am using the following configuration:
> {code:java}
> <config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
> xmlns="urn:org:jgroups" xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/jgroups.xsd">
> <UDP />
> <PING />
> <MERGE3 />
> <FD />
> <VERIFY_SUSPECT />
> <ASYM_ENCRYPT encrypt_entire_message="true" sym_keylength="128"
> sym_algorithm="AES/ECB/PKCS5Padding" asym_keylength="2048"
> asym_algorithm="RSA" />
> <pbcast.NAKACK2 />
> <UNICAST3 />
> <pbcast.STABLE />
> <FRAG2 />
> <AUTH auth_class="org.jgroups.auth.X509Token" auth_value="auth"
> keystore_path="keystore.jks" keystore_password="pwd" cert_alias="alias"
> cipher_type="RSA" />
> <pbcast.GMS />
> </config>
> {code}
> I have 7 services, but will show logs for 2 of them, the coordinator and a random node; all the other nodes behave similarly.
> Initially, when these nodes join the cluster, everything is fine.
> The server is a shared machine with a slow CPU and a slow HDD, so sometimes, when other applications are busy with their tasks, my whole cluster can freeze for 3-5 minutes. During or at the end of such a freeze, some service may log the following:
> {code:java}
> org.jgroups.protocols.FD up
> WARNING: node-26978: I was suspected by node-27291; ignoring the SUSPECT message and sending back a HEARTBEAT_ACK
> WARNING: node-26978: unrecognized cipher; discarding message from node-27291
> org.jgroups.protocols.Encrypt handleEncryptedMessage
> WARNING: node-26978: unrecognized cipher; discarding message from node-27291
> org.jgroups.protocols.Encrypt handleEncryptedMessage
> WARNING: node-26978: unrecognized cipher; discarding message from node-36734
> org.jgroups.protocols.Encrypt handleEncryptedMessage
> {code}
> So the node was kicked out of the cluster because it became "suspected", but the node does not agree with that fact. The cluster coordinator has already changed the shared secret (symmetric) key, so in the further logs of this node I see "unrecognized cipher".
> In the cluster coordinator's logs I see the following:
> {code:java}
> INFO: ISPN100000: Node node-26978 joined the cluster
> ****
> WARN: node-27291: unrecognized cipher; discarding message from node-26978
> org.jgroups.logging.Slf4jLogImpl error
> ERROR: key requester node-26978 is not in current view [***]; ignoring key request
> org.jgroups.logging.Slf4jLogImpl warn
> WARN: node-27291: unrecognized cipher; discarding message from node-26978
> INFO: ISPN000093: Received new, MERGED cluster view for channel ISPN: MergeView::[node-26978|8] (7) [node-26978, node-12721, node-17625, node-45936, node-56674, node-36734, node-27291], 2 subgroups: [node-27291|7] (6) [node-27291, node-12721, node-17625, node-45936, node-56674, node-36734], [node-27291|6] (7) [node-27291, node-26978, node-12721, node-17625, node-45936, node-56674, node-36734]
> {code}
> My understanding of what happened:
> Say I have 3 nodes {A, B, C} in the cluster. The cluster freezes for some minutes, so node {C} becomes suspected and is kicked out of the cluster by the coordinator. For some reason {C} ignores that fact. Later, after the cluster is responsive again, the other nodes start ignoring messages from {C}, because ASYM encryption is used and the secret key has been re-generated by the coordinator. Also, for some reason the MERGE operation does not work, so {C} cannot rejoin the cluster; the cluster now has 2 subgroups that do not communicate with each other, and I don't fully understand why this happens.
> How I temporarily worked around this issue: I changed ASYM_ENCRYPT to SYM_ENCRYPT, and now any node can rejoin the cluster successfully after a freeze, since the key does not change.
> Also, I didn't test it, but I think change_key_on_leave="false" would help; however, that is not the way I want to use the cluster.
> So it looks like a problem with the AUTH + ASYM_ENCRYPT protocol combination, where a node in some cases cannot rejoin the cluster.
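The untested change_key_on_leave mitigation mentioned above would be a single-attribute change on the encryption protocol; the sketch below reuses the attributes from the configuration at the top of this report (the attribute is only named, not tested, in this report, so verify it against the JGroups 4 schema):

```xml
<!-- Keep the shared group key when a member leaves, so a briefly expelled
     node can still decrypt traffic and merge back after a freeze.
     Trade-off: an expelled member retains the ability to read group traffic. -->
<ASYM_ENCRYPT change_key_on_leave="false"
              encrypt_entire_message="true"
              sym_keylength="128" sym_algorithm="AES/ECB/PKCS5Padding"
              asym_keylength="2048" asym_algorithm="RSA" />
```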
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (WFLY-9588) Investigate on splitting as transaction module to not be gathering all dependencies
by Ondra Chaloupka (JIRA)
[ https://issues.jboss.org/browse/WFLY-9588?page=com.atlassian.jira.plugin.... ]
Ondra Chaloupka commented on WFLY-9588:
---------------------------------------
Not sure if it directly blocks WFLY-9455, but making the dependency graph more granular could help a lot in solving it.
> Investigate on splitting as transaction module to not be gathering all dependencies
> -----------------------------------------------------------------------------------
>
> Key: WFLY-9588
> URL: https://issues.jboss.org/browse/WFLY-9588
> Project: WildFly
> Issue Type: Task
> Components: Transactions
> Affects Versions: 11.0.0.Final
> Reporter: Ondra Chaloupka
> Assignee: Ondra Chaloupka
> Attachments: eap71-txn-dependencies.jpg
>
>
> The current org.jboss.as.transaction module does a lot of heavy lifting and has many dependencies and responsibilities; e.g. the org.jboss.jts module depends on it too.
> It should be investigated how to disentangle all the dependencies.
> Plus, this could be beneficial for WildFly Swarm if we are able to split the dependency chains, maybe not using one big jar in the transaction module but providing more granularity. Needs to be investigated.
> See attached image for a view of dependencies.
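Splitting along these lines would mean carving the big module into narrower JBoss Modules definitions; a hypothetical sketch (the module name and jar below are invented for illustration, not actual WildFly modules):

```xml
<!-- module.xml for a hypothetical narrow client-facing module that no
     longer drags in the full transaction subsystem dependency set -->
<module xmlns="urn:jboss:module:1.5" name="org.example.txn.client">
    <resources>
        <resource-root path="txn-client.jar"/>
    </resources>
    <dependencies>
        <!-- depend only on the API, not the whole coordinator implementation -->
        <module name="javax.transaction.api"/>
    </dependencies>
</module>
```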
[JBoss JIRA] (WFLY-9588) Investigate on splitting as transaction module to not be gathering all dependencies
by Ondra Chaloupka (JIRA)
[ https://issues.jboss.org/browse/WFLY-9588?page=com.atlassian.jira.plugin.... ]
Ondra Chaloupka edited comment on WFLY-9588 at 12/4/17 4:19 AM:
----------------------------------------------------------------
Not sure if it directly blocks WFLY-9455, but making the dependency graph more granular could help a lot in solving it.
[JBoss JIRA] (WFLY-9587) Investigate and introduce capabilities for the transactional subsystems transactions/xts/rts
by Ondra Chaloupka (JIRA)
[ https://issues.jboss.org/browse/WFLY-9587?page=com.atlassian.jira.plugin.... ]
Ondra Chaloupka commented on WFLY-9587:
---------------------------------------
[~brian.stansberry]: Hi Brian,
Thanks a lot! That's a great summary and guideline for me to work with. I hope to start looking into it as soon as I am able. This issue came from a discussion at our f2f meeting and it's still a long-term goal. I expect to first check out the dependency labyrinth represented by issue WFLY-9588.
To react to the last part of point 3): I wonder why the expression is permitted on the jts element. Was that not rather just a mistake? Using expressions for jts sounds like a pretty extreme corner case, and breaking compatibility does not sound so bad (at least to me).
> Investigate and introduce capabilities for the transactional subsystems transactions/xts/rts
> --------------------------------------------------------------------------------------------
>
> Key: WFLY-9587
> URL: https://issues.jboss.org/browse/WFLY-9587
> Project: WildFly
> Issue Type: Feature Request
> Components: Transactions, XTS
> Affects Versions: 11.0.0.Final
> Reporter: Ondra Chaloupka
> Assignee: Ondra Chaloupka
>
> The notion of a capability has not been introduced to the transactional subsystems. The WFLY integration expects each subsystem to provide capabilities; the transaction subsystems should support this.
> https://docs.jboss.org/author/display/WFLY/Working+with+WildFly+Capabilities
[JBoss JIRA] (WFLY-9542) IronJacamar is being told to create more KeyConnectionAssociation under CachedConnectionManagerImpl than actual JTA transactions open (Reproducible with NESTED EJB calls)
by Nuno Godinho de Matos (JIRA)
[ https://issues.jboss.org/browse/WFLY-9542?page=com.atlassian.jira.plugin.... ]
Nuno Godinho de Matos commented on WFLY-9542:
---------------------------------------------
Would it be possible to get a bit of direction on where a patch should ideally be applied to tackle this issue?
Many thanks for your understanding,
Kindest regards.
> IronJacamar is being told to create more KeyConnectionAssociation under CachedConnectionManagerImpl than actual JTA transactions open (Reproducible with NESTED EJB calls)
> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: WFLY-9542
> URL: https://issues.jboss.org/browse/WFLY-9542
> Project: WildFly
> Issue Type: Bug
> Components: EJB
> Affects Versions: 10.1.0.Final
> Reporter: Nuno Godinho de Matos
> Labels: Arjuna, IronJacamar, JTA
>
> I have been trying to understand why we sometimes see the exception:
> {panel}
> IJ000311: Throwable from unregister connection: java.lang.IllegalStateException: IJ000152: Trying to return an unknown connection: org.jboss.jca.adapters.jdbc.jdk7.WrappedConnectionJDK7@2021ff6.
> {panel}
> I have now created a sample application that allows the exception to be reproduced:
> https://github.com/99sono/wildlfly-ironjacamar-unregisterconnection/
> To summarize:
> If we start a JTA transaction T1 by invoking an @EJB with TransactionAttribute RequiresNew,
> and this EJB then calls a second EJB with TransactionAttribute Supports,
> then, if the first JDBC access is done in the nested EJB, the connection handed over to the JPA layer (eclipselink) will be popped out of
> org.jboss.jca.core.connectionmanager.ccm.CachedConnectionManagerImpl.currentObjects
> as soon as we come out of the nested EJB call.
> I do not know how serious this is: whether the popped-out connection in fact remains locked to the ongoing JTA transaction, or whether it might go back to the connection pool and potentially be up for grabs.
> I lack the knowledge to say how crucial it is to have good synchronization between the connections managed under
> org.jboss.jca.core.connectionmanager.ccm.CachedConnectionManagerImpl
> and the JPA layer.
> But intuitively, I would say: quite important.
> So whenever a stack gets popped while it still holds managed connections, I would intuitively say one of the parties has done something wrong. Either:
> (a) Arjuna is asking IronJacamar to open stacks when the JTA transaction is the same and it shouldn't, or
> (b) it does not really matter that the connection is popped out: somehow IronJacamar still knows that the popped-out connection is still bound to the JTA transaction, and the problem is logically strange but of low impact.
> What appears to be true is that eclipselink is doing something logical, in the sense that it only asks for the connection to be closed/unregistered at the end of the JTA transaction.
> NOTE:
> In fact, the error that we see with the stack trace could also happen with no stack trace being logged by IronJacamar.
> This is the scenario where CachedConnectionManagerImpl leaves unregisterConnection via:
> {panel}
> CopyOnWriteArrayList<ConnectionRecord> conns = cmToConnectionsMap.get(cm);
> // Can happen if connections are "passed" between contexts
> if (conns == null)
> return;
> {panel}
> But in my opinion this exit path deserves the exact same stack trace.
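The push/pop behaviour described above can be reduced to a few lines. The class below is a hypothetical simplification invented for illustration (not IronJacamar's real API): it shows how a connection registered in a nested frame becomes unknown once that frame is popped.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Hypothetical model of the behavior described above: a key/stack frame is
// pushed for every EJB invocation, even when the JTA transaction is unchanged.
public class CcmStackSketch {
    static final Deque<List<String>> currentObjects = new ArrayDeque<>();

    static void pushFrame() { currentObjects.push(new ArrayList<>()); }
    static void popFrame() { currentObjects.pop(); }           // connection records are lost here
    static void register(String conn) { currentObjects.peek().add(conn); }
    static boolean unregister(String conn) { return currentObjects.peek().remove(conn); }

    public static void main(String[] args) {
        pushFrame();                                // outer EJB call (RequiresNew), JTA tx T1
        pushFrame();                                // nested EJB call (Supports), still T1
        register("WrappedConnectionJDK7@2021ff6");  // JPA layer acquires the connection here
        popFrame();                                 // leaving the nested EJB discards the frame,
                                                    // together with its connection record
        // At commit, the outer frame is asked to return a connection it never saw,
        // which corresponds to the IJ000152 "unknown connection" failure:
        System.out.println(unregister("WrappedConnectionJDK7@2021ff6")); // prints false
    }
}
```

Under this model, either frames should not be pushed while the JTA transaction is unchanged, or a popped frame's records should be merged into the frame below it, which mirrors option (a) above.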
> The sample application can be found here:
> https://github.com/99sono/wildlfly-ironjacamar-unregisterconnection/
> This sample application is not trivial to deploy, because one needs to have a data source configured.
> In any case, the sample application ships my standalone.xml in src/test/resources.
> It should be relatively easy to adjust the JDBC drivers configured in the standalone.xml,
> and it should be relatively simple to fix the persistence.xml to look up eclipselink in the appropriate module.
> I will now copy-paste below the log of the exception from the sample application.
> {panel}
> 2017-11-09 10:20:00,918 INFO [startupconnectionproblem.observer.EventObserverNestedJtaTransactionEjb] (ServerService Thread Pool -- 73)
> --------------------------------------
> -- STARTING TransactionImple < ac, BasicAction: 0:ffffac140c58:634ef22a:5a041dbd:15 status: ActionStatus.RUNNING >: observing the CDI event StartupEventToTryToReproduceIJ000311Issue
> --------------------------------------
> 2017-11-09 10:20:00,919 INFO [startupconnectionproblem.observer.EventObserverNestedJtaTransactionEjb] (ServerService Thread Pool -- 73) Doing a fetch ALL query. just to bind the connection manager to the KEY of stack level 1.
> 2017-11-09 10:20:01,033 FINE [org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor] (ServerService Thread Pool -- 73) Eclipselink connected (jndiName: java:/jdbc/SAMPLE_DS)- obtaining connection: org.jboss.jca.adapters.jdbc.jdk7.WrappedConnectionJDK7@1d5b3a43. Eclipselink Server session is: ServerSession(
> DatabaseAccessor(connected)
> PostgreSQLPlatform)
> 2017-11-09 10:20:01,057 FINE [org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor] (ServerService Thread Pool -- 73) EclipseLink - Closing connection: org.jboss.jca.adapters.jdbc.jdk7.WrappedConnectionJDK7@1d5b3a43
> 2017-11-09 10:20:01,063 INFO [startupconnectionproblem.observer.EventObserverNestedJtaTransactionEjb] (ServerService Thread Pool -- 73) At this point the org.jboss.jca.core.connectionmanager.ccm.CachedConnectionManagerImpl will have registered on stack1 the connection used by eclipselink, and unregistered as well.
> However, this temporary register/de-register is sufficient to add to the current key.getCMToConnectionsMap() an entry for the JDBC connection manager.
> This is fundamental, because without this step the exception that we will later see would be hidden away - even though the logical problem would be there, no exception would be logged. By taking this magical step here, we have ensured that we will be able to reproduce the exception later.
> To summarize what has happened so far:
> 1. We have forced eclipselink to register and unregister a connection due to our fetch-id query
> 2. The JTA transaction TransactionImple < ac, BasicAction: 0:ffffac140c58:634ef22a:5a041dbd:15 status: ActionStatus.RUNNING > continues healthy and running without any problem
> 3. Meanwhile, internally, our CachedConnectionManagerImpl has, for this thread, a single stack of keys in its currentObjects field
> 4. The leading key on this stack does not hold any connection records (eclipselink returned the connection to the pool), but it does have a KEY for the JDBC connection manager
> 5. Point 4 is the magical step that allows us to see the stack trace, but it is not the core of the issue - just the core of seeing the stack trace
> 2017-11-09 10:20:01,066 INFO [startupconnectionproblem.observer.EventObserverNestedJtaTransactionEjb] (ServerService Thread Pool -- 73) START: executoreSynchronousSameJta
> 2017-11-09 10:20:01,066 INFO [startupconnectionproblem.observer.EventObserverNestedJtaTransactionEjb] (ServerService Thread Pool -- 73) Right now we are within the same JTA transaction. But on a NESTED EJB call.
> What technically has happened at this point is that IRON JAC AMAR now has a second stack (keyAssociation)
> in the currentObjects. We add stacks each time we go through an EJB even if we keep the JTA transaction,
> and this is the BUG. But we will see more about this later.
>
> 2017-11-09 10:20:01,066 INFO [startupconnectionproblem.observer.EventObserverNestedJtaTransactionEjb] (ServerService Thread Pool -- 73) STARTING 3: observing the CDI event StartupEventToTryToReproduceIJ000311Issue - The current transaction id is: TransactionImple < ac, BasicAction: 0:ffffac140c58:634ef22a:5a041dbd:15 status: ActionStatus.RUNNING >
> 2017-11-09 10:20:01,066 INFO [startupconnectionproblem.observer.EventObserverNestedJtaTransactionEjb] (ServerService Thread Pool -- 73) Going to create 50 entities on the database
> 2017-11-09 10:20:01,067 FINE [org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor] (ServerService Thread Pool -- 73) Eclipselink connected (jndiName: jdbc/SAMPLE_NON_JTA_DS)- obtaining connection: org.jboss.jca.adapters.jdbc.jdk7.WrappedConnectionJDK7@4be2573e. Eclipselink Server session is: ClientSession({})
> 2017-11-09 10:20:01,073 FINE [org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor] (ServerService Thread Pool -- 73) EclipseLink - Closing connection: org.jboss.jca.adapters.jdbc.jdk7.WrappedConnectionJDK7@4be2573e
> 2017-11-09 10:20:01,078 INFO [startupconnectionproblem.observer.EventObserverNestedJtaTransactionEjb] (ServerService Thread Pool -- 73) 50 entities should have been created:
> 2017-11-09 10:20:01,078 INFO [startupconnectionproblem.observer.EventObserverNestedJtaTransactionEjb] (ServerService Thread Pool -- 73) Now we want to flush - we want to force eclipselink to have to acquire the java:/jdbc/SAMPLE_DS and not be allowed to close the connection yet.
> 2017-11-09 10:20:01,085 FINE [org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor] (ServerService Thread Pool -- 73) Eclipselink connected (jndiName: java:/jdbc/SAMPLE_DS)- obtaining connection: org.jboss.jca.adapters.jdbc.jdk7.WrappedConnectionJDK7@2021ff6. Eclipselink Server session is: ClientSession({default=DatabaseAccessor(connected)})
> 2017-11-09 10:20:01,101 INFO [startupconnectionproblem.observer.EventObserverNestedJtaTransactionEjb] (ServerService Thread Pool -- 73) The flush has been done. We expect that the log now shows us that java:/jdbc/SAMPLE_DS connection was acquired but not yet closed by eclipselink.
> 2017-11-09 10:20:01,101 INFO [startupconnectionproblem.observer.EventObserverNestedJtaTransactionEjb] (ServerService Thread Pool -- 73) ENDING TransactionImple < ac, BasicAction: 0:ffffac140c58:634ef22a:5a041dbd:15 status: ActionStatus.RUNNING >: finishing to observer. The current transaction id is: {}
> 2017-11-09 10:20:01,101 INFO [startupconnectionproblem.observer.EventObserverNestedJtaTransactionEjb] (ServerService Thread Pool -- 73) OK. Now we are about to leave our nested EJB call. What is happening now is very important.
> 1. We will leave a nested EJB call but not the JTA transaction
> 2. Wildfly will go to the CachedConnectionManagerImpl and tell it to pop a stack (BUG)
> 3. The stack that goes away is actually holding the JDBC connection that has been given to ECLIPSELINK and that eclipselink will later try to close
> 4. When we leave this call we go back to STACK 1, where no connections are being managed
> 2017-11-09 10:20:01,102 INFO [startupconnectionproblem.observer.EventObserverNestedJtaTransactionEjb] (ServerService Thread Pool -- 73) FINISH: executoreSynchronousSameJta:
>
> 2017-11-09 10:20:01,102 WARN [startupconnectionproblem.observer.EventObserverNestedJtaTransactionEjb] (ServerService Thread Pool -- 73) NOW we are out of our nested EJB call. We are about to leave the main EJB transaction call.
> In short we are about to reproduce the stack trace.
> What will now happen is the following.
> 1. Wildfly will tell eclipselink that the transaction is at an end and it should commit if it can
> 2. Eclipselink does its job of committing the changes and takes the JTA connection it was given and tries to close it
> 3. IronJacamar will log the stack trace we wanted to reproduce, saying it has no idea what connection we are trying to return to the pool.
> 4. The reason the CachedConnectionManagerImpl does not know about the connection is because it was managed under STACK 2 but was popped out of existence after the nested transaction
> 5. Now it is very important to keep in mind that even if we were not seeing the stack trace, the error would be there (the moment we popped a key stack that still had connections)
> 6. Keep in mind that if we were to modify the source code and disable our reproduceBugService.fetchAlIds() call, we would see no stack trace but the same issue would be present.
> 7. The difference that doing the reproduceBugService.fetchAlIds() makes is whether or not CachedConnectionManagerImpl, during unregister connection, leaves via the "if conns == null return" or goes further into the "if (!ignoreConnections)"
> 8. OK - Now that we understand what is happening behind the scenes we can let eclipselink try to close the JDBC connection at the end of the JTA transaction
>
> 2017-11-09 10:20:01,106 FINE [org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor] (ServerService Thread Pool -- 73) EclipseLink - Closing connection: org.jboss.jca.adapters.jdbc.jdk7.WrappedConnectionJDK7@2021ff6
> 2017-11-09 10:20:01,107 INFO [org.jboss.jca.core.connectionmanager.listener.TxConnectionListener] (ServerService Thread Pool -- 73) IJ000311: Throwable from unregister connection: java.lang.IllegalStateException: IJ000152: Trying to return an unknown connection: org.jboss.jca.adapters.jdbc.jdk7.WrappedConnectionJDK7@2021ff6
> at org.jboss.jca.core.connectionmanager.ccm.CachedConnectionManagerImpl.unregisterConnection(CachedConnectionManagerImpl.java:408)
> at org.jboss.jca.core.connectionmanager.listener.TxConnectionListener.connectionClosed(TxConnectionListener.java:645)
> at org.jboss.jca.adapters.jdbc.BaseWrapperManagedConnection.returnHandle(BaseWrapperManagedConnection.java:596)
> at org.jboss.jca.adapters.jdbc.BaseWrapperManagedConnection.closeHandle(BaseWrapperManagedConnection.java:541)
> at org.jboss.jca.adapters.jdbc.WrappedConnection.returnConnection(WrappedConnection.java:298)
> at org.jboss.jca.adapters.jdbc.WrappedConnection.close(WrappedConnection.java:256)
> at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.closeDatasourceConnection(DatabaseAccessor.java:522)
> at org.eclipse.persistence.internal.databaseaccess.DatasourceAccessor.closeConnection(DatasourceAccessor.java:537)
> at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.closeConnection(DatabaseAccessor.java:545)
> at org.eclipse.persistence.internal.databaseaccess.DatasourceAccessor.closeJTSConnection(DatasourceAccessor.java:188)
> at org.eclipse.persistence.sessions.server.ClientSession.releaseJTSConnection(ClientSession.java:171)
> at org.eclipse.persistence.transaction.AbstractSynchronizationListener.beforeCompletion(AbstractSynchronizationListener.java:175)
> at org.eclipse.persistence.transaction.JTASynchronizationListener.beforeCompletion(JTASynchronizationListener.java:68)
> at org.jboss.as.txn.service.internal.tsr.JCAOrderedLastSynchronizationList.beforeCompletion(JCAOrderedLastSynchronizationList.java:116)
> at com.arjuna.ats.internal.jta.resources.arjunacore.SynchronizationImple.beforeCompletion(SynchronizationImple.java:76)
> at com.arjuna.ats.arjuna.coordinator.TwoPhaseCoordinator.beforeCompletion(TwoPhaseCoordinator.java:368)
> at com.arjuna.ats.arjuna.coordinator.TwoPhaseCoordinator.end(TwoPhaseCoordinator.java:91)
> at com.arjuna.ats.arjuna.AtomicAction.commit(AtomicAction.java:162)
> at com.arjuna.ats.internal.jta.transaction.arjunacore.TransactionImple.commitAndDisassociate(TransactionImple.java:1200)
> at com.arjuna.ats.internal.jta.transaction.arjunacore.BaseTransaction.commit(BaseTransaction.java:126)
> at com.arjuna.ats.jbossatx.BaseTransactionManagerDelegate.commit(BaseTransactionManagerDelegate.java:89)
> at org.jboss.as.ejb3.tx.CMTTxInterceptor.endTransaction(CMTTxInterceptor.java:91)
> at org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInOurTx(CMTTxInterceptor.java:279)
> at org.jboss.as.ejb3.tx.CMTTxInterceptor.requiresNew(CMTTxInterceptor.java:349)
> at org.jboss.as.ejb3.tx.CMTTxInterceptor.processInvocation(CMTTxInterceptor.java:241)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
> at org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
> at org.jboss.as.ejb3.component.invocationmetrics.WaitTimeInterceptor.processInvocation(WaitTimeInterceptor.java:47)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
> at org.jboss.as.ejb3.security.SecurityContextInterceptor.processInvocation(SecurityContextInterceptor.java:100)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
> at org.jboss.as.ejb3.deployment.processors.StartupAwaitInterceptor.processInvocation(StartupAwaitInterceptor.java:22)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
> at org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:437)
> at org.jboss.as.ejb3.concurrency.ContainerManagedConcurrencyInterceptor.processInvocation(ContainerManagedConcurrencyInterceptor.java:110)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
> at org.jboss.as.ejb3.component.interceptors.ShutDownInterceptorFactory$1.processInvocation(ShutDownInterceptorFactory.java:64)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
> at org.jboss.as.ejb3.component.interceptors.LoggingInterceptor.processInvocation(LoggingInterceptor.java:67)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
> at org.jboss.as.ee.component.NamespaceContextInterceptor.processInvocation(NamespaceContextInterceptor.java:50)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
> at org.jboss.as.ejb3.component.interceptors.AdditionalSetupInterceptor.processInvocation(AdditionalSetupInterceptor.java:54)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
> at org.jboss.invocation.ContextClassLoaderInterceptor.processInvocation(ContextClassLoaderInterceptor.java:64)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
> at org.jboss.invocation.InterceptorContext.run(InterceptorContext.java:356)
> at org.wildfly.security.manager.WildFlySecurityManager.doChecked(WildFlySecurityManager.java:636)
> at org.jboss.invocation.AccessCheckingInterceptor.processInvocation(AccessCheckingInterceptor.java:61)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
> at org.jboss.invocation.InterceptorContext.run(InterceptorContext.java:356)
> at org.jboss.invocation.PrivilegedWithCombinerInterceptor.processInvocation(PrivilegedWithCombinerInterceptor.java:80)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
> at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61)
> at org.jboss.as.ee.component.ViewService$View.invoke(ViewService.java:198)
> at org.jboss.as.ee.component.ViewDescription$1.processInvocation(ViewDescription.java:185)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
> at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61)
> at org.jboss.as.ee.component.ProxyInvocationHandler.invoke(ProxyInvocationHandler.java:73)
> at startupconnectionproblem.observer.EventObserverNestedJtaTransactionEjb$$$view9.observeStartupEventToTryToReproduceIJ000311Issue(Unknown Source)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.jboss.weld.util.reflection.Reflections.invokeAndUnwrap(Reflections.java:433)
> at org.jboss.weld.bean.proxy.EnterpriseBeanProxyMethodHandler.invoke(EnterpriseBeanProxyMethodHandler.java:128)
> at org.jboss.weld.bean.proxy.EnterpriseTargetBeanInstance.invoke(EnterpriseTargetBeanInstance.java:56)
> at org.jboss.weld.bean.proxy.ProxyMethodHandler.invoke(ProxyMethodHandler.java:100)
> at startupconnectionproblem.observer.EventObserverNestedJtaTransactionEjb$Proxy$_$$_Weld$EnterpriseProxy$.observeStartupEventToTryToReproduceIJ000311Issue(Unknown Source)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.jboss.weld.injection.StaticMethodInjectionPoint.invoke(StaticMethodInjectionPoint.java:88)
> at org.jboss.weld.injection.StaticMethodInjectionPoint.invoke(StaticMethodInjectionPoint.java:78)
> at org.jboss.weld.injection.MethodInvocationStrategy$SimpleMethodInvocationStrategy.invoke(MethodInvocationStrategy.java:129)
> at org.jboss.weld.event.ObserverMethodImpl.sendEvent(ObserverMethodImpl.java:313)
> at org.jboss.weld.event.ObserverMethodImpl.sendEvent(ObserverMethodImpl.java:291)
> at org.jboss.weld.event.ObserverMethodImpl.notify(ObserverMethodImpl.java:269)
> at org.jboss.weld.event.ObserverNotifier.notifySyncObservers(ObserverNotifier.java:302)
> at org.jboss.weld.event.ObserverNotifier.notify(ObserverNotifier.java:291)
> at org.jboss.weld.event.EventImpl.fire(EventImpl.java:89)
> at startupconnectionproblem.event.FireStartupEventsServiceImpl.fireStartupCDIEvents(FireStartupEventsServiceImpl.java:41)
> at startupconnectionproblem.event.FireStartupEventsServiceImpl$Proxy$_$$_WeldClientProxy.fireStartupCDIEvents(Unknown Source)
> at startupconnectionproblem.event.StartupSingletonLocalEjb.postConstrcut(StartupSingletonLocalEjb.java:38)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.jboss.as.ee.component.ManagedReferenceLifecycleMethodInterceptor.processInvocation(ManagedReferenceLifecycleMethodInterceptor.java:96)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
> at org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.doLifecycleInterception(Jsr299BindingsInterceptor.java:114)
> at org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.processInvocation(Jsr299BindingsInterceptor.java:103)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
> at org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:437)
> at org.jboss.weld.ejb.AbstractEJBRequestScopeActivationInterceptor.aroundInvoke(AbstractEJBRequestScopeActivationInterceptor.java:73)
> at org.jboss.as.weld.ejb.EjbRequestScopeActivationInterceptor.processInvocation(EjbRequestScopeActivationInterceptor.java:83)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
> at org.jboss.as.weld.injection.WeldInjectionInterceptor.processInvocation(WeldInjectionInterceptor.java:53)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
> at org.jboss.as.ee.component.AroundConstructInterceptorFactory$1.processInvocation(AroundConstructInterceptorFactory.java:28)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
> at org.jboss.as.weld.injection.WeldInterceptorInjectionInterceptor.processInvocation(WeldInterceptorInjectionInterceptor.java:56)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
> at org.jboss.as.weld.ejb.Jsr299BindingsCreateInterceptor.processInvocation(Jsr299BindingsCreateInterceptor.java:100)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
> at org.jboss.as.ee.component.NamespaceContextInterceptor.processInvocation(NamespaceContextInterceptor.java:50)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
> at org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInNoTx(CMTTxInterceptor.java:263)
> at org.jboss.as.ejb3.tx.LifecycleCMTTxInterceptor.notSupported(LifecycleCMTTxInterceptor.java:87)
> at org.jboss.as.ejb3.tx.LifecycleCMTTxInterceptor.processInvocation(LifecycleCMTTxInterceptor.java:64)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
> at org.jboss.as.weld.injection.WeldInjectionContextInterceptor.processInvocation(WeldInjectionContextInterceptor.java:43)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
> at org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
> at org.jboss.as.ee.concurrent.ConcurrentContextInterceptor.processInvocation(ConcurrentContextInterceptor.java:45)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
> at org.jboss.invocation.ContextClassLoaderInterceptor.processInvocation(ContextClassLoaderInterceptor.java:64)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
> at org.jboss.as.ejb3.component.singleton.StartupCountDownInterceptor.processInvocation(StartupCountDownInterceptor.java:25)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
> at org.jboss.invocation.InterceptorContext.run(InterceptorContext.java:356)
> at org.jboss.invocation.PrivilegedWithCombinerInterceptor.processInvocation(PrivilegedWithCombinerInterceptor.java:80)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
> at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61)
> at org.jboss.as.ee.component.BasicComponent.constructComponentInstance(BasicComponent.java:161)
> at org.jboss.as.ee.component.BasicComponent.constructComponentInstance(BasicComponent.java:134)
> at org.jboss.as.ee.component.BasicComponent.createInstance(BasicComponent.java:88)
> at org.jboss.as.ejb3.component.singleton.SingletonComponent.getComponentInstance(SingletonComponent.java:124)
> at org.jboss.as.ejb3.component.singleton.SingletonComponent.start(SingletonComponent.java:138)
> at org.jboss.as.ee.component.ComponentStartService$1.run(ComponentStartService.java:54)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> at org.jboss.threads.JBossThread.run(JBossThread.java:320)
> 2017-11-09 10:20:01,114 INFO [startupconnectionproblem.event.FireStartupEventsServiceImpl] (ServerService Thread Pool -- 73) FIINSH: StartupEventToTryToReproduceIJ000311Issue
> 2017-11-09 10:20:01,114 INFO [startupconnectionproblem.event.StartupSingletonLocalEjb] (ServerService Thread Pool -- 73) ENDING 1: Post construct of startup bean singleton ending
> 2017-11-09 10:20:01,240 INFO [javax.enterprise.resource.webcontainer.jsf.config] (ServerService Thread Pool -- 73) Initializing Mojarra 2.2.13.SP1 20160303-1204 for context '/app'
> 2017-11-09 10:20:02,098 INFO [org.primefaces.webapp.PostConstructA
> {panel}
> Thanks for considering this issue.
> I do not know how much impact this issue can actually have.
> We have many hundreds of different types of JTA business transactions, and we see the behavior this sample application demonstrates relatively often in system tests and stress tests.
> Now that I have a better understanding of the issue, I can in some cases rework the code to dodge this problem by avoiding nested calls of EJBs in the same transaction - but this will not always be possible.
> And I would be quite afraid of this issue if it means that connections still in the hands of EclipseLink are going back to the pool.
> This would explain some strange errors we have had only in the stress test application, which we do not see when using a different app server.
> I must say it was extremely difficult to make the stack trace reproducible.
> There are many tricky points in leading the code to go precisely through the code path that produces the exception.
> But it is relatively trivial to make the code leave through the "silent bug" code path.
> Many thanks for all the help on this.
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (JGRP-2239) AUTH + ASYM_ENCRYPT causes problem with re-joining cluster (MERGE)
by Boris Sh (JIRA)
[ https://issues.jboss.org/browse/JGRP-2239?page=com.atlassian.jira.plugin.... ]
Boris Sh updated JGRP-2239:
---------------------------
Description:
Hello,
I am using the following configuration:
{code:java}
<config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="urn:org:jgroups" xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/jgroups.xsd">
<UDP />
<PING />
<MERGE3 />
<FD />
<VERIFY_SUSPECT />
<ASYM_ENCRYPT encrypt_entire_message="true" sym_keylength="128"
sym_algorithm="AES/ECB/PKCS5Padding" asym_keylength="2048"
asym_algorithm="RSA" />
<pbcast.NAKACK2 />
<UNICAST3 />
<pbcast.STABLE />
<FRAG2 />
<AUTH auth_class="org.jgroups.auth.X509Token" auth_value="auth"
keystore_path="keystore.jks" keystore_password="pwd" cert_alias="alias"
cipher_type="RSA" />
<pbcast.GMS />
</config>
{code}
I have 7 services, but I will show logs for 2 of them, the coordinator and one random node; all the other nodes behave similarly.
Initially, when these nodes join the cluster, everything is fine.
The server is a shared machine with a slow CPU and a slow HDD, so sometimes, when other applications are busy with their tasks, my whole cluster can freeze for 3-5 minutes. During or at the end of such a freeze, some service may log the following:
{code:java}
org.jgroups.protocols.FD up
WARNING: node-26978: I was suspected by node-27291; ignoring the SUSPECT message and sending back a HEARTBEAT_ACK
WARNING: node-26978: unrecognized cipher; discarding message from node-27291
org.jgroups.protocols.Encrypt handleEncryptedMessage
WARNING: node-26978: unrecognized cipher; discarding message from node-27291
org.jgroups.protocols.Encrypt handleEncryptedMessage
WARNING: node-26978: unrecognized cipher; discarding message from node-36734
org.jgroups.protocols.Encrypt handleEncryptedMessage
{code}
So the node was kicked out of the cluster because it became "suspect", but the node does not agree with that fact. The cluster coordinator has already changed the shared symmetric key, so in the further logs of this server I see "unrecognized cipher".
In cluster coordinator logs I see the following:
{code:java}
INFO: ISPN100000: Node node-26978 joined the cluster
****
WARN: node-27291: unrecognized cipher; discarding message from node-26978
org.jgroups.logging.Slf4jLogImpl error
ERROR: key requester node-26978 is not in current view [***]; ignoring key request
org.jgroups.logging.Slf4jLogImpl warn
WARN: node-27291: unrecognized cipher; discarding message from node-26978
INFO: ISPN000093: Received new, MERGED cluster view for channel ISPN: MergeView::[node-26978|8] (7) [node-26978, node-12721, node-17625, node-45936, node-56674, node-36734, node-27291], 2 subgroups: [node-27291|7] (6) [node-27291, node-12721, node-17625, node-45936, node-56674, node-36734], [node-27291|6] (7) [node-27291, node-26978, node-12721, node-17625, node-45936, node-56674, node-36734]
{code}
My understanding of what has happened:
For example, I have 3 nodes {A, B, C} in the cluster. The cluster freezes for some minutes, so node {C} becomes suspected and is kicked out of the cluster by the coordinator. For some reason {C} ignores that fact. Later, after the cluster is up again, the other nodes start ignoring messages from {C}, because ASYM encryption is in use and the group key has been re-generated by the coordinator. Also, for some reason the MERGE operation does not work, so {C} cannot join back, and the cluster now has 2 subgroups that do not communicate with each other; I do not fully understand why this happens.
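The "unrecognized cipher" symptom can be illustrated in isolation with plain JDK crypto, outside of JGroups: a message sealed under the old group key simply fails to decrypt once the coordinator has generated a new one. This is only a sketch of the key-mismatch mechanism (class and method names are mine, not from JGroups), using the same AES/ECB/PKCS5Padding, 128-bit setup as the stack above:
{code:java}
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.util.Arrays;

public class KeyMismatchDemo {

    // Decrypt, returning null on failure -- roughly what the receiver does
    // when it logs "unrecognized cipher" and discards the message.
    static byte[] tryDecrypt(SecretKey key, byte[] ciphertext) {
        try {
            Cipher c = Cipher.getInstance("AES/ECB/PKCS5Padding");
            c.init(Cipher.DECRYPT_MODE, key);
            return c.doFinal(ciphertext);
        } catch (Exception e) {
            return null;
        }
    }

    /** true iff a message sealed under the old key is unreadable under the new one. */
    public static boolean demo() throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128); // sym_keylength="128" from the stack above
        SecretKey oldKey = kg.generateKey(); // key the suspected node {C} still holds
        SecretKey newKey = kg.generateKey(); // key the coordinator re-generated

        Cipher enc = Cipher.getInstance("AES/ECB/PKCS5Padding");
        enc.init(Cipher.ENCRYPT_MODE, oldKey);
        byte[] plaintext = "message from C".getBytes();
        byte[] ciphertext = enc.doFinal(plaintext);

        byte[] withOldKey = tryDecrypt(oldKey, ciphertext); // readable
        byte[] withNewKey = tryDecrypt(newKey, ciphertext); // rejected/garbage
        return Arrays.equals(withOldKey, plaintext)
            && (withNewKey == null || !Arrays.equals(withNewKey, plaintext));
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo() ? "old key works, new key rejects" : "unexpected");
    }
}
{code}
Until {C} obtains the new group key (which the coordinator refuses to hand out because {C} is not in the current view), every message it sends or receives hits the failing path.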
How I temporarily worked around this issue: I changed ASYM_ENCRYPT to SYM_ENCRYPT, and now any node can come back to the cluster successfully after a freeze, as the shared key never changes.
Also, I didn't test it, but I think change_key_on_leave="false" would help; however, that is not a setting I want to use.
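The two mitigations above can be sketched as replacements for the ASYM_ENCRYPT element in the stack shown earlier. The keystore name, password, and alias below are illustrative placeholders, not values from this report:
{code:xml}
<!-- Option 1 (what was actually done): SYM_ENCRYPT with a pre-shared key
     loaded from a keystore; the key never rotates, so a kicked node can
     still talk to the cluster after a freeze. -->
<SYM_ENCRYPT sym_algorithm="AES/ECB/PKCS5Padding" sym_keylength="128"
             keystore_name="defaultStore.keystore" store_password="changeit"
             alias="myKey" />

<!-- Option 2 (untested per the report): keep ASYM_ENCRYPT but do not
     rotate the group key when a member leaves. -->
<ASYM_ENCRYPT encrypt_entire_message="true" sym_keylength="128"
              sym_algorithm="AES/ECB/PKCS5Padding" asym_keylength="2048"
              asym_algorithm="RSA" change_key_on_leave="false" />
{code}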
So this looks like a problem with the AUTH + ASYM_ENCRYPT protocol combination, where a node in some cases cannot rejoin the cluster.
[JBoss JIRA] (JGRP-2239) AUTH + ASYM_ENCRYPT causes problem with re-joining cluster (MERGE)
by Boris Sh (JIRA)
Boris Sh created JGRP-2239:
------------------------------
Summary: AUTH + ASYM_ENCRYPT causes problem with re-joining cluster (MERGE)
Key: JGRP-2239
URL: https://issues.jboss.org/browse/JGRP-2239
Project: JGroups
Issue Type: Bug
Affects Versions: 4.0.6
Environment: Infinispan 9.1.1 + JGroups 4.0.6.Final + Vert.x 3.5.0
Reporter: Boris Sh
Assignee: Bela Ban
Hello,
I am using the following configuration:
{code:java}
<config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="urn:org:jgroups" xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/jgroups.xsd">
<UDP />
<PING />
<MERGE3 />
<FD />
<VERIFY_SUSPECT />
<ASYM_ENCRYPT encrypt_entire_message="true" sym_keylength="128"
sym_algorithm="AES/ECB/PKCS5Padding" asym_keylength="2048"
asym_algorithm="RSA" />
<pbcast.NAKACK2 />
<UNICAST3 />
<pbcast.STABLE />
<FRAG2 />
<AUTH auth_class="org.jgroups.auth.X509Token" auth_value="auth"
keystore_path="keystore.jks" keystore_password="pwd" cert_alias="alias"
cipher_type="RSA" />
<pbcast.GMS />
</config>
{code}
I have 7 services, but I will show logs for just 2 of them, the coordinator and one other node; all the remaining nodes behave similarly.
Initially, when these nodes join the cluster, everything is fine.
The server is a shared machine with a slow CPU and a slow HDD, so sometimes, when other applications are busy with their own tasks, my whole cluster can freeze for 3-5 minutes. During such a freeze, or right at its end, some service may log the following:
{code:java}
org.jgroups.protocols.FD up
WARNING: node-26978: I was suspected by node-27291; ignoring the SUSPECT message and sending back a HEARTBEAT_ACK
WARNING: node-26978: unrecognized cipher; discarding message from node-27291
org.jgroups.protocols.Encrypt handleEncryptedMessage
WARNING: node-26978: unrecognized cipher; discarding message from node-27291
org.jgroups.protocols.Encrypt handleEncryptedMessage
WARNING: node-26978: unrecognized cipher; discarding message from node-36734
org.jgroups.protocols.Encrypt handleEncryptedMessage
{code}
So the node was kicked out of the cluster after being "suspected", but the node doesn't agree with that fact. The cluster coordinator has already changed the shared symmetric key, so in this node's subsequent logs I see "unrecognized cipher".
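This failure mode can be illustrated with plain JDK crypto. The sketch below is my own illustration, not JGroups code (the class and method names are made up): ASYM_ENCRYPT shares one symmetric group key, and once the coordinator regenerates it, a node still holding the old key can no longer decrypt group messages, which JGroups surfaces as "unrecognized cipher".

```java
import javax.crypto.BadPaddingException;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class StaleKeyDemo {

    // Encrypts with a freshly generated group key, then tries to decrypt with a
    // different (stale) key; returns true when decryption fails, which happens
    // with overwhelming probability for a wrong AES key under PKCS5 padding.
    static boolean decryptWithStaleKeyFails() throws Exception {
        KeyGenerator gen = KeyGenerator.getInstance("AES");
        gen.init(128);                               // matches sym_keylength="128"
        SecretKey newGroupKey = gen.generateKey();   // regenerated by the coordinator
        SecretKey staleKey    = gen.generateKey();   // still held by the kicked node

        Cipher enc = Cipher.getInstance("AES/ECB/PKCS5Padding"); // sym_algorithm from the config
        enc.init(Cipher.ENCRYPT_MODE, newGroupKey);
        byte[] wire = enc.doFinal("cluster message".getBytes("UTF-8"));

        Cipher dec = Cipher.getInstance("AES/ECB/PKCS5Padding");
        dec.init(Cipher.DECRYPT_MODE, staleKey);
        try {
            dec.doFinal(wire);
            return false; // wrong key happened to yield valid padding (rare)
        } catch (BadPaddingException e) {
            return true;  // the usual outcome: the message is discarded
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(decryptWithStaleKeyFails()
            ? "stale key cannot decrypt the message"
            : "stale key accidentally produced valid padding");
    }
}
```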
In cluster coordinator logs I see the following:
{code:java}
INFO: ISPN100000: Node node-26978 joined the cluster
****
WARN: node-27291: unrecognized cipher; discarding message from node-26978
org.jgroups.logging.Slf4jLogImpl error
ERROR: key requester node-26978 is not in current view [***]; ignoring key request
org.jgroups.logging.Slf4jLogImpl warn
WARN: node-27291: unrecognized cipher; discarding message from node-26978
INFO: ISPN000093: Received new, MERGED cluster view for channel ISPN: MergeView::[node-26978|8] (7) [node-26978, node-12721, node-17625, node-45936, node-56674, node-36734, node-27291], 2 subgroups: [node-27291|7] (6) [node-27291, node-12721, node-17625, node-45936, node-56674, node-36734], [node-27291|6] (7) [node-27291, node-26978, node-12721, node-17625, node-45936, node-56674, node-36734]
{code}
My understanding of what has happened:
For example, I have 3 nodes {A, B, C} in the cluster. The cluster freezes for a few minutes, so node {C} is suspected and kicked out of the cluster by the coordinator. For some reason {C} ignores that fact. Later, once the cluster is up again, the other nodes start ignoring messages from {C}, because ASYM encryption is used and the secret key has been regenerated by the coordinator. Also, for some reason the MERGE operation doesn't work, so {C} cannot rejoin the cluster; the cluster now has 2 subgroups that don't communicate with each other, and I don't fully understand why this happens.
How I temporarily worked around this issue: I changed ASYM_ENCRYPT to SYM_ENCRYPT, and now any node can rejoin the cluster successfully after a freeze, since the key never changes.
Also, I didn't test it, but I think change_key_on_leave="false" would help; however, that is not how I want to run the cluster.
So it looks like a problem with the AUTH + ASYM_ENCRYPT protocol combination, where in some cases a node cannot rejoin the cluster.
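For reference, the untested workaround mentioned above would only change the ASYM_ENCRYPT element of the configuration shown earlier; this is a sketch, not verified on 4.0.6:

{code:java}
<ASYM_ENCRYPT encrypt_entire_message="true" sym_keylength="128"
    sym_algorithm="AES/ECB/PKCS5Padding" asym_keylength="2048"
    asym_algorithm="RSA" change_key_on_leave="false" />
{code}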
[JBoss JIRA] (ELY-283) Investigate Elytron and gssproxy interoperability
by Jan Kalina (JIRA)
[ https://issues.jboss.org/browse/ELY-283?page=com.atlassian.jira.plugin.sy... ]
Jan Kalina edited comment on ELY-283 at 12/3/17 2:35 PM:
---------------------------------------------------------
Few notes:
* the property *sun.security.jgss.native* needs to be set via JAVA_OPTS="-D ..." before running standalone.sh (setting it in standalone.xml is applied too late and is ignored by the JVM)
* to debug SunNativeProvider, add the property *sun.security.nativegss.debug=true*
* currently Elytron initializes LoginContext/Krb5LoginModule even if the native provider should be used - this should be made optional in the future, but for now a keystore with the given credential has to be provided in order to continue (the obtained credential should be ignored by the GSSManager if everything is set up correctly)
* To start: {code}GSSPROXY_BEHAVIOR=REMOTE_FIRST GSS_USE_PROXY=1 JAVA_OPTS="-Djavax.security.auth.useSubjectCredsOnly=false -Dsun.security.jgss.lib=/usr/lib64/libgssapi_krb5.so.2.2 -Dsun.security.jgss.native=true -Dsun.security.nativegss.debug=true -Dsun.security.krb5.debug=true -Dsun.security.jgss.debug=true" bin/standalone.sh{code}
was (Author: honza889):
Few notes:
* the property *sun.security.jgss.native* needs to be set via JAVA_OPTS="-D ..." before running standalone.sh (setting it in standalone.xml is applied too late and is ignored by the JVM)
* to debug SunNativeProvider, add the property *sun.security.nativegss.debug=true*
* To start: {code}GSSPROXY_BEHAVIOR=REMOTE_FIRST GSS_USE_PROXY=1 JAVA_OPTS="-Djavax.security.auth.useSubjectCredsOnly=false -Dsun.security.jgss.lib=/usr/lib64/libgssapi_krb5.so.2.2 -Dsun.security.jgss.native=true -Dsun.security.nativegss.debug=true -Dsun.security.krb5.debug=true -Dsun.security.jgss.debug=true" bin/standalone.sh{code}
> Investigate Elytron and gssproxy interoperability
> -------------------------------------------------
>
> Key: ELY-283
> URL: https://issues.jboss.org/browse/ELY-283
> Project: WildFly Elytron
> Issue Type: Task
> Components: SASL
> Reporter: Peter Skopek
> Assignee: Jan Kalina
> Fix For: 2.0.0.Alpha1
>
>
> Investigate Elytron and gssproxy interoperability.
> https://fedorahosted.org/gss-proxy/
[JBoss JIRA] (ELY-283) Investigate Elytron and gssproxy interoperability
by Jan Kalina (JIRA)
[ https://issues.jboss.org/browse/ELY-283?page=com.atlassian.jira.plugin.sy... ]
Jan Kalina edited comment on ELY-283 at 12/3/17 2:20 PM:
---------------------------------------------------------
Few notes:
* the property *sun.security.jgss.native* needs to be set via JAVA_OPTS="-D ..." before running standalone.sh (setting it in standalone.xml is applied too late and is ignored by the JVM)
* to debug SunNativeProvider, add the property *sun.security.nativegss.debug=true*
* To start: {code}GSSPROXY_BEHAVIOR=REMOTE_FIRST GSS_USE_PROXY=1 JAVA_OPTS="-Djavax.security.auth.useSubjectCredsOnly=false -Dsun.security.jgss.lib=/usr/lib64/libgssapi_krb5.so.2.2 -Dsun.security.jgss.native=true -Dsun.security.nativegss.debug=true -Dsun.security.krb5.debug=true -Dsun.security.jgss.debug=true" bin/standalone.sh{code}
was (Author: honza889):
Few notes:
* the property *sun.security.jgss.native* needs to be set via JAVA_OPTS="-D ..." before running standalone.sh (setting it in standalone.xml is applied too late and is ignored by the JVM)
* to debug SunNativeProvider, add the property *sun.security.nativegss.debug=true*
> Investigate Elytron and gssproxy interoperability
> -------------------------------------------------
>
> Key: ELY-283
> URL: https://issues.jboss.org/browse/ELY-283
> Project: WildFly Elytron
> Issue Type: Task
> Components: SASL
> Reporter: Peter Skopek
> Assignee: Jan Kalina
> Fix For: 2.0.0.Alpha1
>
>
> Investigate Elytron and gssproxy interoperability.
> https://fedorahosted.org/gss-proxy/