[JBoss JIRA] (ISPN-7236) Administration console - management of remote sites is not working
by Roman Macor (JIRA)
Roman Macor created ISPN-7236:
---------------------------------
Summary: Administration console - management of remote sites is not working
Key: ISPN-7236
URL: https://issues.jboss.org/browse/ISPN-7236
Project: Infinispan
Issue Type: Bug
Components: JMX, reporting and management
Affects Versions: 9.0.0.Alpha4
Reporter: Roman Macor
Assignee: Vladimir Blagojevic
Attachments: domain.xml, host.xml
Start the server with the attached configuration.
Click on the cache container -> MemcachedCache has the correct "Remotely backed up" icon, but the name of the backup site (BRN in this case) should also be shown.
[~vblagojevic] was this removed on purpose?
*Click Actions -> Manage backup sites -> clicking on any action results in:*
Error There has been an error executing the operation: WFLYCTL0031: No operation named 'BRN' exists at address [ ("subsystem" => "datagrid-infinispan"), ("cache-container" => "clustered") ]
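For reference, WFLYCTL0031 means the management request names an operation that the target resource does not expose, which suggests the console ends up sending the site name itself ("BRN") as the operation instead of invoking a site-management operation with the site as a parameter. Below is a rough Java/DMR sketch of the two request shapes; the operation and parameter names ("backup-bring-site-online", "site-name") are placeholders, not confirmed names from the datagrid-infinispan subsystem.
~~~
// Hypothetical sketch of the failing vs. expected request shape; the
// operation and parameter names are placeholders, not the subsystem's
// actual names.
import org.jboss.dmr.ModelNode;

public class BackupSiteRequestSketch {

    // What WFLYCTL0031 suggests is being sent: the site name used as the operation.
    static ModelNode failingRequest() {
        ModelNode op = new ModelNode();
        op.get("operation").set("BRN");
        ModelNode address = op.get("address");
        address.add("subsystem", "datagrid-infinispan");
        address.add("cache-container", "clustered");
        return op;
    }

    // The expected shape: a real site-management operation with the site
    // passed as a parameter ("backup-bring-site-online" / "site-name" are
    // placeholders; the actual names come from the resource description).
    static ModelNode expectedRequest() {
        ModelNode op = new ModelNode();
        op.get("operation").set("backup-bring-site-online");
        ModelNode address = op.get("address");
        address.add("subsystem", "datagrid-infinispan");
        address.add("cache-container", "clustered");
        op.get("site-name").set("BRN");
        return op;
    }
}
~~~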
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (ISPN-7235) Cross site replication fails if authentication is enabled
by Tristan Tarrant (JIRA)
Tristan Tarrant created ISPN-7235:
-------------------------------------
Summary: Cross site replication fails if authentication is enabled
Key: ISPN-7235
URL: https://issues.jboss.org/browse/ISPN-7235
Project: Infinispan
Issue Type: Bug
Components: Cross-Site Replication, Security
Affects Versions: 8.2.5.Final, 9.0.0.Alpha4
Reporter: Tristan Tarrant
Assignee: Tristan Tarrant
Fix For: 9.0.0.Beta1, 8.2.6.Final
~~~
org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (Incoming-2,shared=tcp-global) ISPN000071: Caught exception when handling command SingleXSiteRpcCommand{command=ClearCommand{flags=null}}: java.lang.SecurityException: ISPN000287: Unauthorized access: subject 'null' lacks 'ADMIN' permission
at org.infinispan.security.impl.AuthorizationHelper.checkPermission(AuthorizationHelper.java:76)
at org.infinispan.security.impl.AuthorizationManagerImpl.checkPermission(AuthorizationManagerImpl.java:44)
at org.infinispan.security.impl.SecureCacheImpl.getCacheConfiguration(SecureCacheImpl.java:454)
at org.infinispan.xsite.BackupReceiverRepositoryImpl.createBackupReceiver(BackupReceiverRepositoryImpl.java:163)
at org.infinispan.xsite.BackupReceiverRepositoryImpl.getBackupReceiver(BackupReceiverRepositoryImpl.java:95)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommandFromRemoteSite(CommandAwareRpcDispatcher.java:283)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:252)
at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:460) [jgroups-3.6.3.Final-redhat-6.jar:3.6.3.Final-redhat-6]
at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:377) [jgroups-3.6.3.Final-redhat-6.jar:3.6.3.Final-redhat-6]
at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:250) [jgroups-3.6.3.Final-redhat-6.jar:3.6.3.Final-redhat-6]
at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:675) [jgroups-3.6.3.Final-redhat-6.jar:3.6.3.Final-redhat-6]
at org.jgroups.blocks.mux.MuxUpHandler.up(MuxUpHandler.java:130) [jgroups-3.6.3.Final-redhat-6.jar:3.6.3.Final-redhat-6]
at org.jgroups.JChannel.up(JChannel.java:739) [jgroups-3.6.3.Final-redhat-6.jar:3.6.3.Final-redhat-6]
at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1029) [jgroups-3.6.3.Final-redhat-6.jar:3.6.3.Final-redhat-6]
at org.jgroups.protocols.relay.RELAY2.deliver(RELAY2.java:618) [jgroups-3.6.3.Final-redhat-6.jar:3.6.3.Final-redhat-6]
at org.jgroups.protocols.relay.RELAY2.route(RELAY2.java:514) [jgroups-3.6.3.Final-redhat-6.jar:3.6.3.Final-redhat-6]
at org.jgroups.protocols.relay.RELAY2.handleMessage(RELAY2.java:489) [jgroups-3.6.3.Final-redhat-6.jar:3.6.3.Final-redhat-6]
at org.jgroups.protocols.relay.RELAY2.handleRelayMessage(RELAY2.java:470) [jgroups-3.6.3.Final-redhat-6.jar:3.6.3.Final-redhat-6]
at org.jgroups.protocols.relay.Relayer$Bridge.receive(Relayer.java:265) [jgroups-3.6.3.Final-redhat-6.jar:3.6.3.Final-redhat-6]
at org.jgroups.JChannel.up(JChannel.java:769) [jgroups-3.6.3.Final-redhat-6.jar:3.6.3.Final-redhat-6]
at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1033) [jgroups-3.6.3.Final-redhat-6.jar:3.6.3.Final-redhat-6]
at org.jgroups.protocols.FRAG2.up(FRAG2.java:182) [jgroups-3.6.3.Final-redhat-6.jar:3.6.3.Final-redhat-6]
at org.jgroups.protocols.FlowControl.up(FlowControl.java:447) [jgroups-3.6.3.Final-redhat-6.jar:3.6.3.Final-redhat-6]
at org.jgroups.stack.Protocol.up(Protocol.java:420) [jgroups-3.6.3.Final-redhat-6.jar:3.6.3.Final-redhat-6]
at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:294) [jgroups-3.6.3.Final-redhat-6.jar:3.6.3.Final-redhat-6]
at org.jgroups.protocols.UNICAST3.deliverBatch(UNICAST3.java:1087) [jgroups-3.6.3.Final-redhat-6.jar:3.6.3.Final-redhat-6]
at org.jgroups.protocols.UNICAST3.removeAndDeliver(UNICAST3.java:886) [jgroups-3.6.3.Final-redhat-6.jar:3.6.3.Final-redhat-6]
at org.jgroups.protocols.UNICAST3.handleDataReceived(UNICAST3.java:790) [jgroups-3.6.3.Final-redhat-6.jar:3.6.3.Final-redhat-6]
at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:426) [jgroups-3.6.3.Final-redhat-6.jar:3.6.3.Final-redhat-6]
at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:652) [jgroups-3.6.3.Final-redhat-6.jar:3.6.3.Final-redhat-6]
at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:155) [jgroups-3.6.3.Final-redhat-6.jar:3.6.3.Final-redhat-6]
at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:200) [jgroups-3.6.3.Final-redhat-6.jar:3.6.3.Final-redhat-6]
at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:299) [jgroups-3.6.3.Final-redhat-6.jar:3.6.3.Final-redhat-6]
at org.jgroups.protocols.MERGE3.up(MERGE3.java:286) [jgroups-3.6.3.Final-redhat-6.jar:3.6.3.Final-redhat-6]
at org.jgroups.protocols.Discovery.up(Discovery.java:291) [jgroups-3.6.3.Final-redhat-6.jar:3.6.3.Final-redhat-6]
at org.jgroups.protocols.TP$ProtocolAdapter.up(TP.java:2842) [jgroups-3.6.3.Final-redhat-6.jar:3.6.3.Final-redhat-6]
at org.jgroups.protocols.TP.passMessageUp(TP.java:1577) [jgroups-3.6.3.Final-redhat-6.jar:3.6.3.Final-redhat-6]
at org.jgroups.protocols.TP$MyHandler.run(TP.java:1796) [jgroups-3.6.3.Final-redhat-6.jar:3.6.3.Final-redhat-6]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [rt.jar:1.8.0_101]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [rt.jar:1.8.0_101]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.8.0_101]
~~~
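The trace shows the remote-site command being handled on a JGroups thread with no security Subject attached, so the ADMIN check in AuthorizationHelper fails as soon as BackupReceiverRepositoryImpl touches the secured cache. Below is a minimal sketch of the usual Infinispan pattern for this kind of trusted internal access; it assumes the internal org.infinispan.security.Security.doPrivileged helper (as used by the various SecurityActions classes) and is not the actual fix shipped in 8.2.6.Final/9.0.0.Beta1.
~~~
// Minimal sketch, not the shipped patch: wrap the secured-cache access in a
// privileged action so that trusted internal x-site handling code is not
// subject to the (absent) remote caller's permissions.
import java.security.PrivilegedAction;

import org.infinispan.AdvancedCache;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.security.Security;

final class BackupReceiverSecurityActions {

    private BackupReceiverSecurityActions() {
    }

    // Reads the cache configuration without requiring the remote subject
    // (which is null for cross-site commands) to hold ADMIN permission.
    static Configuration getCacheConfiguration(AdvancedCache<?, ?> cache) {
        return Security.doPrivileged(
                (PrivilegedAction<Configuration>) cache::getCacheConfiguration);
    }
}
~~~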
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (ISPN-6906) Reduce dependency on JBoss Marshalling
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-6906?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-6906:
-------------------------------
Status: Resolved (was: Pull Request Sent)
Fix Version/s: 9.0.0.Beta1
Resolution: Done
> Reduce dependency on JBoss Marshalling
> --------------------------------------
>
> Key: ISPN-6906
> URL: https://issues.jboss.org/browse/ISPN-6906
> Project: Infinispan
> Issue Type: Sub-task
> Reporter: Galder Zamarreño
> Assignee: Galder Zamarreño
> Fix For: 9.0.0.Beta1, 9.0.0.Final
>
>
> Since its inception, Infinispan has used JBoss Marshalling for all of its marshalling needs. With some tweaking (e.g. hooking in a custom ObjectTable instance), the JBoss Marshalling-based Infinispan externalizer layer is able to produce tiny binary payloads, but it has some problems, partly due to JBoss Marshalling itself and partly due to our own implementation details:
> JBoss Marshalling's objective has always been to produce a binary format that conforms to the Java serialization specification, but this is not a requirement for Infinispan. In fact, to keep payloads minimal, Infinispan hooks in at the ObjectTable level.
> On top of the mismatch mentioned above, JBoss Marshalling's programming model is based around creating a marshaller, writing to it, and then finishing it, which discards its context (the same applies to unmarshalling). The problem here is twofold:
> * Both the marshaller and the unmarshaller are quite heavy objects, keeping context information such as references to instances that appear multiple times, so constantly creating them is costly. To avoid wasting resources, we ended up adding thread locals that keep a number of marshaller/unmarshaller instances per thread (see ISPN-1815). These thread locals can potentially increase memory usage (see user dev post).
> * The second problem is the need to support reentrant marshalling calls when storing data in binary format. The need for reentrancy appears in situations like this: imagine you have to marshall a PutKV command, so you start a marshaller and write some data. Then you have to store the key and value, but these are kept in binary form, so they first have to be turned into bytes: again a marshaller needs to be created, the key/value written, the marshaller finished, and only then can the resulting bytes be written into the command itself. So there needs to be a way to start a second marshaller without having finished the first one. This is why the changes added in ISPN-1815 resulted in the thread local keeping a number of marshaller/unmarshaller instances rather than a single one.
> Finally, for inter-node cluster communication and for storing data in the persistence layer, Infinispan uses JBoss Marshalling both for the types it knows about, e.g. internal data types, and for the types it does not know about, e.g. key and value types. This means that even though the marshaller is configurable, it is not easy to switch to a different one (see here for an example where we try to use a different marshaller). This problem is not present in the Hot Rod Java clients, since JBoss Marshalling is used there purely to marshall keys and values, so it is very easy to try out a different marshaller.
> With all this in mind, the following change recommendations can be made:
> * For the types we know about, marshall them manually in the most compact way possible. The JBoss Marshalling codebase already does a lot of this for encoding basic types (e.g. Strings, numbers), so we should be able to reuse that (a sketch follows below).
> * Only rely on 3rd party marshalling libraries for the types we do not know about, e.g. key and value types. (If these key/value types happen to be primitives or primitive derivations, e.g. arrays, we should be able to optimise those too, so 3rd party marshalling libraries would only be used for custom, unknown types.) The benefit is that we decouple Infinispan from using JBoss Marshalling all over the place, making it easier to try different marshalling mechanisms.
> * With JBoss Marshalling used only for unknown custom types, its marshaller implementation is free to use thread locals if it wants to; we effectively get rid of them everywhere else, and we can also switch to, or try out, different 3rd party marshallers which might be better suited.
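To make the first recommendation concrete, here is a minimal sketch of the kind of compact, hand-written marshalling it describes, using the existing AdvancedExternalizer API. UUID is only used as a self-contained illustration (Infinispan already ships an internal externalizer for it); the point is that a known type is written field by field, with no class descriptors or reflection.
~~~
// Minimal illustrative sketch: a compact, hand-written externalizer for a
// known type, written field by field with primitive encodings.
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;
import java.util.Collections;
import java.util.Set;
import java.util.UUID;

import org.infinispan.commons.marshall.AdvancedExternalizer;

public class CompactUuidExternalizer implements AdvancedExternalizer<UUID> {

    @Override
    public void writeObject(ObjectOutput output, UUID uuid) throws IOException {
        // 16 bytes of payload: no class descriptor, no reflection.
        output.writeLong(uuid.getMostSignificantBits());
        output.writeLong(uuid.getLeastSignificantBits());
    }

    @Override
    public UUID readObject(ObjectInput input) throws IOException, ClassNotFoundException {
        return new UUID(input.readLong(), input.readLong());
    }

    @Override
    public Set<Class<? extends UUID>> getTypeClasses() {
        return Collections.<Class<? extends UUID>>singleton(UUID.class);
    }

    @Override
    public Integer getId() {
        // Any id outside the range reserved for Infinispan's own externalizers.
        return 2500;
    }
}
~~~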
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (ISPN-7219) Administration console - creating template from existing template fails in some cases
by Vladimir Blagojevic (JIRA)
[ https://issues.jboss.org/browse/ISPN-7219?page=com.atlassian.jira.plugin.... ]
Vladimir Blagojevic commented on ISPN-7219:
-------------------------------------------
[11:33am] remerson: I think I know the problem. We retrieve the structure of the existing template no problem, but when saving, the issue is that the file-store=>FILE_STORE node has not been created, i.e. the "is-new-node" flag is missing, hence an add operation is not added to the DMR steps
[11:35am] remerson: So it should be a case of adding a step in the Ctrl to recursively add "is-new-node" to all child nodes when copying from an existing template
[11:36am] remerson: Because we know that we are creating a new template, it *shouldn't* cause a "node already exists"-style error from the server
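For context, copying a template means the console must submit one DMR :add step per node in the copied tree. The rough Java sketch below uses the jboss-dmr API, with addresses following the reproduction steps and the error quoted further down and attribute values omitted; it only illustrates the mechanism (the console itself is JavaScript). If the file-store step is dropped because "is-new-node" was never set on it, the write-behind step targets a parent that does not exist and the whole composite rolls back with WFLYCTL0216.
~~~
// Illustration only (the console itself is JavaScript): copying a template
// should emit one :add step per copied node. Attribute values are omitted.
import org.jboss.dmr.ModelNode;

public class TemplateCopySketch {

    // Build a single :add step for the given (key, value) address pairs.
    static ModelNode addStep(String... addressPairs) {
        ModelNode step = new ModelNode();
        step.get("operation").set("add");
        ModelNode address = step.get("address");
        for (int i = 0; i < addressPairs.length; i += 2) {
            address.add(addressPairs[i], addressPairs[i + 1]);
        }
        return step;
    }

    static ModelNode copyTemplateComposite() {
        ModelNode composite = new ModelNode();
        composite.get("operation").set("composite");
        composite.get("address").setEmptyList();
        ModelNode steps = composite.get("steps");

        // Step 1: the new template itself.
        steps.add(addStep("profile", "clustered",
                "subsystem", "datagrid-infinispan",
                "cache-container", "clustered",
                "configurations", "CONFIGURATIONS",
                "distributed-cache-configuration", "newTemplate"));

        // Step 2: the copied file store. This is the step that goes missing
        // when "is-new-node" is not set on the copied child node.
        steps.add(addStep("profile", "clustered",
                "subsystem", "datagrid-infinispan",
                "cache-container", "clustered",
                "configurations", "CONFIGURATIONS",
                "distributed-cache-configuration", "newTemplate",
                "file-store", "FILE_STORE"));

        // Step 3: write-behind under the file store. Without step 2 its
        // parent resource never exists, so the composite rolls back with
        // WFLYCTL0216 "Management resource ... not found".
        steps.add(addStep("profile", "clustered",
                "subsystem", "datagrid-infinispan",
                "cache-container", "clustered",
                "configurations", "CONFIGURATIONS",
                "distributed-cache-configuration", "newTemplate",
                "file-store", "FILE_STORE",
                "write-behind", "WRITE_BEHIND"));
        return composite;
    }
}
~~~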
> Administration console - creating template from existing template fails in some cases
> -------------------------------------------------------------------------------------
>
> Key: ISPN-7219
> URL: https://issues.jboss.org/browse/ISPN-7219
> Project: Infinispan
> Issue Type: Bug
> Components: JMX, reporting and management
> Affects Versions: 9.0.0.Alpha4
> Reporter: Roman Macor
> Assignee: Vladimir Blagojevic
>
> Click on cache container -> Configuration -> Templates -> create new template -> fill in template name: newTemplate, base configuration: persistent-file-store-write-behind (template with file store and write-behind configured) -> next -> create
> Result:
> Pop up with error message:
> {"WFLYCTL0062: Composite operation failed and was rolled back. Steps that failed:":{"Operation step-2":"WFLYCTL0216: Management resource '[\n (\"profile\" => \"clustered\"),\n (\"subsystem\" => \"datagrid-infinispan\"),\n (\"cache-container\" => \"clustered\"),\n (\"configurations\" => \"CONFIGURATIONS\"),\n (\"distributed-cache-configuration\" => \"new\"),\n (\"file-store\" => \"FILE_STORE\"),\n (\"write-behind\" => \"WRITE_BEHIND\")\n]' not found"}}
> The new template is created, but the configuration is not copied.
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)