[JBoss JIRA] (ISPN-12116) Upgrade undertow to 2.0.30
by Pedro Ruivo (Jira)
Pedro Ruivo created ISPN-12116:
----------------------------------
Summary: Upgrade undertow to 2.0.30
Key: ISPN-12116
URL: https://issues.redhat.com/browse/ISPN-12116
Project: Infinispan
Issue Type: Component Upgrade
Affects Versions: 9.4.19.Final
Reporter: Pedro Ruivo
Assignee: Pedro Ruivo
Fix For: 9.4.20.Final
io.undertow:undertow-core=2.0.30.Final
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
[JBoss JIRA] (ISPN-12115) CacheContainerAdmin should allow defining configurations
by Dan Berindei (Jira)
Dan Berindei created ISPN-12115:
-----------------------------------
Summary: CacheContainerAdmin should allow defining configurations
Key: ISPN-12115
URL: https://issues.redhat.com/browse/ISPN-12115
Project: Infinispan
Issue Type: Feature Request
Components: Core, Hot Rod, Server
Affects Versions: 11.0.1.Final
Reporter: Dan Berindei
Fix For: 12.0.0.Final
{{EmbeddedCacheManagerAdmin}} and {{RemoteCacheManagerAdmin}} allow creating a cache with a complete configuration or with an existing configuration template.
It should be possible to define a configuration (template?) without a cache, so users/applications can define a common configuration and then reference it by name.
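A rough sketch of how this could look from the caller's side (the {{defineTemplate}} name below is hypothetical and only illustrates the requested capability; the {{createCache}} overloads are the ones that exist today):
{code:java}
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManagerAdmin;

public class AdminTemplateSketch {
   public static void main(String[] args) {
      try (DefaultCacheManager manager = new DefaultCacheManager()) {
         EmbeddedCacheManagerAdmin admin = manager.administration();

         // A common configuration that several caches should share.
         Configuration common = new ConfigurationBuilder()
               .expiration().lifespan(60_000)
               .build();

         // Exists today: create a cache from a complete configuration...
         admin.createCache("cache-a", common);
         // ...or from a configuration template that is already defined:
         // admin.createCache("cache-b", "my-template");

         // Requested by this issue (hypothetical method, not in 11.0.x):
         // define the configuration/template itself, without a cache, so it
         // can later be referenced by name from createCache calls.
         // admin.defineTemplate("my-template", common);
         // admin.createCache("cache-b", "my-template");
      }
   }
}
{code}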
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
[JBoss JIRA] (ISPN-12114) Server defaults and embedded defaults should be the same
by Dan Berindei (Jira)
Dan Berindei created ISPN-12114:
-----------------------------------
Summary: Server defaults and embedded defaults should be the same
Key: ISPN-12114
URL: https://issues.redhat.com/browse/ISPN-12114
Project: Infinispan
Issue Type: Bug
Components: Server
Affects Versions: 11.0.1.Final
Reporter: Dan Berindei
Assignee: Dan Berindei
Fix For: 12.0.0.Final
The server has an {{infinispan-defaults.xml}} file with some configuration templates that are available by default in the server and are automatically applied to all the caches defined in the server configuration XML.
These configuration templates change defaults such as the lock acquisition timeout and the state transfer timeout to match the old WildFly server's defaults, but these default values are not documented.
{{EmbeddedCacheManagerAdmin}} and {{RemoteCacheManagerAdmin}} also mention "the configuration marked as default on the container/server", but we don't really have that any more, as the default cache's configuration is no longer used as a template.
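For comparison, the embedded defaults that the server templates silently override can be read straight off a default-built {{Configuration}} (a minimal sketch using the embedded configuration API; the printed values are whatever the embedded build produces, no numbers are assumed here):
{code:java}
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;

public class EmbeddedDefaults {
   public static void main(String[] args) {
      // Build a configuration without setting any attribute, then print the
      // embedded defaults for the two timeouts mentioned above.
      Configuration defaults = new ConfigurationBuilder().build();
      System.out.println("lock acquisition timeout (ms): "
            + defaults.locking().lockAcquisitionTimeout());
      System.out.println("state transfer timeout (ms): "
            + defaults.clustering().stateTransfer().timeout());
   }
}
{code}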
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
[JBoss JIRA] (ISPN-12113) HTTP authentication with only Digest SHA mechanisms fails
by Tristan Tarrant (Jira)
[ https://issues.redhat.com/browse/ISPN-12113?page=com.atlassian.jira.plugi... ]
Tristan Tarrant updated ISPN-12113:
-----------------------------------
Description:
{code:xml}
<cache-container name="clustered" default-cache="sessionCache" statistics="true">
<transport stack="tcp-stack" site="z9" cluster="clustered" node-name="rhdgserver" />
<security>
<authorization>
<identity-role-mapper />
<role name="admin" permissions="ALL" />
<role name="reader" permissions="READ" />
<role name="writer" permissions="WRITE" />
</authorization>
</security>
<metrics gauges="true" histograms="true" />
</cache-container>
(...)
<endpoints socket-binding="default" security-realm="default">
<hotrod-connector name="hotrod">
<authentication>
<sasl server-name="rhdgserver" mechanisms="DIGEST-SHA-256" qop="auth" />
</authentication>
</hotrod-connector>
<rest-connector name="rest">
<authentication mechanisms="DIGEST-SHA-256"/>
</rest-connector>
</endpoints>
{code}
was:
Request: Make the REST protocol work with cache authentication/encryption.
Description:
The REST protocol is not supported for use with cache authentication/authorization; using it results in a SecurityException, and the same happens when using the CLI.
This is according to the Red Hat Data Grid 7 Server Guide (and the solution at https://access.redhat.com/solutions/2947551).
Test:
The configuration below is expected to fail because one cannot use cache-container authentication *and* the REST protocol together.
{code:xml}
<cache-container name="clustered" default-cache="sessionCache" statistics="true">
<transport stack="tcp-stack" site="z9" cluster="clustered" node-name="rhdgserver" />
<security>
<authorization>
<identity-role-mapper />
<role name="admin" permissions="ALL" />
<role name="reader" permissions="READ" />
<role name="writer" permissions="WRITE" />
</authorization>
</security>
<metrics gauges="true" histograms="true" />
</cache-container>
(...)
<endpoints socket-binding="default" security-realm="default">
<hotrod-connector name="hotrod">
<authentication>
<sasl server-name="rhdgserver" mechanisms="DIGEST-SHA-256" qop="auth" />
</authentication>
</hotrod-connector>
<rest-connector name="rest">
<authentication mechanisms="DIGEST-SHA-256"/>
</rest-connector>
</endpoints>
{code}
Workaround:
Testing with one or the other works, as in:
{noformat}
#curl -u admin:admin http://localhost:11222/rest/v2/caches/test1
{"stats":{"hits":0,"current_number_of_entries_in_memory":0,"time_since_start":32,"time_since_reset":32,"current_number_of_entries":0,"total_number_of_entries":0,"off_heap_memory_used":0,"data_memory_used":0,"remove_hits":0,"remove_misses":0,"evictions":0,"average_read_time":0,"average_read_time_nanos":0,"average_write_time":0,"average_write_time_nanos":0,"average_remove_time":0,"average_remove_time_nanos":0,"required_minimum_number_of_nodes":1,"retrievals":0,"stores":0,"misses":0},"size":0,"configuration":{"distributed-cache":{"mode":"SYNC","remote-timeout":17500,"state-transfer":{"timeout":60000},"transaction":{"mode":"NONE"},"memory":{"object":{}},"locking":{"concurrency-level":1000,"acquire-timeout":15000,"striping":false},"statistics":true}},"rehash_in_progress":false,"bounded":false,"indexed":false,"persistent":false,"transactional":false,"secured":false,"has_remote_backup":false,"indexing_in_progress":false,"statistics":true}
{noformat}
> HTTP authentication with only Digest SHA mechanisms fails
> ---------------------------------------------------------
>
> Key: ISPN-12113
> URL: https://issues.redhat.com/browse/ISPN-12113
> Project: Infinispan
> Issue Type: Bug
> Reporter: Francisco De Melo Junior
> Assignee: Francisco De Melo Junior
> Priority: Minor
> Labels: authentication, encryption, rest
> Fix For: 12.0.0.Final
>
>
> {code:xml}
> <cache-container name="clustered" default-cache="sessionCache" statistics="true">
> <transport stack="tcp-stack" site="z9" cluster="clustered" node-name="rhdgserver" />
> <security>
> <authorization>
> <identity-role-mapper />
> <role name="admin" permissions="ALL" />
> <role name="reader" permissions="READ" />
> <role name="writer" permissions="WRITE" />
> </authorization>
> </security>
> <metrics gauges="true" histograms="true" />
> </cache-container>
> (...)
> <endpoints socket-binding="default" security-realm="default">
> <hotrod-connector name="hotrod">
> <authentication>
> <sasl server-name="rhdgserver" mechanisms="DIGEST-SHA-256" qop="auth" />
> </authentication>
> </hotrod-connector>
> <rest-connector name="rest">
> <authentication mechanisms="DIGEST-SHA-256"/>
> </rest-connector>
> </endpoints>
> {code}
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
[JBoss JIRA] (ISPN-12113) HTTP authentication with only Digest SHA mechanisms fails
by Tristan Tarrant (Jira)
[ https://issues.redhat.com/browse/ISPN-12113?page=com.atlassian.jira.plugi... ]
Tristan Tarrant updated ISPN-12113:
-----------------------------------
Security: (was: Red Hat Internal)
> HTTP authentication with only Digest SHA mechanisms fails
> ---------------------------------------------------------
>
> Key: ISPN-12113
> URL: https://issues.redhat.com/browse/ISPN-12113
> Project: Infinispan
> Issue Type: Bug
> Reporter: Francisco De Melo Junior
> Assignee: Francisco De Melo Junior
> Priority: Minor
> Labels: authentication, encryption, rest
> Fix For: 12.0.0.Final
>
>
> Request: Make the REST protocol work with cache authentication/encryption.
> Description:
> The REST protocol is not supported for use with cache authentication/authorization; using it results in a SecurityException, and the same happens when using the CLI.
> This is according to the Red Hat Data Grid 7 Server Guide (and the solution at https://access.redhat.com/solutions/2947551).
> Test:
> The configuration below is expected to fail because one cannot use cache-container authentication *and* the REST protocol together.
> {code:xml}
> <cache-container name="clustered" default-cache="sessionCache" statistics="true">
> <transport stack="tcp-stack" site="z9" cluster="clustered" node-name="rhdgserver" />
> <security>
> <authorization>
> <identity-role-mapper />
> <role name="admin" permissions="ALL" />
> <role name="reader" permissions="READ" />
> <role name="writer" permissions="WRITE" />
> </authorization>
> </security>
> <metrics gauges="true" histograms="true" />
> </cache-container>
> (...)
> <endpoints socket-binding="default" security-realm="default">
> <hotrod-connector name="hotrod">
> <authentication>
> <sasl server-name="rhdgserver" mechanisms="DIGEST-SHA-256" qop="auth" />
> </authentication>
> </hotrod-connector>
> <rest-connector name="rest">
> <authentication mechanisms="DIGEST-SHA-256"/>
> </rest-connector>
> </endpoints>
> {code}
> Workaround:
> Testing with one or the other works, as in:
> {noformat}
> #curl -u admin:admin http://localhost:11222/rest/v2/caches/test1
> {"stats":{"hits":0,"current_number_of_entries_in_memory":0,"time_since_start":32,"time_since_reset":32,"current_number_of_entries":0,"total_number_of_entries":0,"off_heap_memory_used":0,"data_memory_used":0,"remove_hits":0,"remove_misses":0,"evictions":0,"average_read_time":0,"average_read_time_nanos":0,"average_write_time":0,"average_write_time_nanos":0,"average_remove_time":0,"average_remove_time_nanos":0,"required_minimum_number_of_nodes":1,"retrievals":0,"stores":0,"misses":0},"size":0,"configuration":{"distributed-cache":{"mode":"SYNC","remote-timeout":17500,"state-transfer":{"timeout":60000},"transaction":{"mode":"NONE"},"memory":{"object":{}},"locking":{"concurrency-level":1000,"acquire-timeout":15000,"striping":false},"statistics":true}},"rehash_in_progress":false,"bounded":false,"indexed":false,"persistent":false,"transactional":false,"secured":false,"has_remote_backup":false,"indexing_in_progress":false,"statistics":true}
> {noformat}
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
[JBoss JIRA] (ISPN-12109) TcpConnection.Receiver.run() blocking call
by Dan Berindei (Jira)
[ https://issues.redhat.com/browse/ISPN-12109?page=com.atlassian.jira.plugi... ]
Dan Berindei reassigned ISPN-12109:
-----------------------------------
Assignee: Dan Berindei (was: Will Burns)
> TcpConnection.Receiver.run() blocking call
> ------------------------------------------
>
> Key: ISPN-12109
> URL: https://issues.redhat.com/browse/ISPN-12109
> Project: Infinispan
> Issue Type: Bug
> Components: Core, Test Suite
> Affects Versions: 11.0.1.Final
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Major
> Fix For: 12.0.0.Final, 11.0.2.Final
>
>
> {noformat}
> [TestSuiteProgress] Test failed: org.infinispan.distribution.rehash.WorkDuringJoinTest[DIST_SYNC, tx=false].BlockingChecker
> 22:28:37.967 [Connection.Receiver [127.0.0.1:34169 - 127.0.0.1:8001]-8,WorkDuringJoinTest-NodeC] ERROR org.infinispan.commons.test.TestSuiteProgress - Test failed: org.infinispan.distribution.rehash.WorkDuringJoinTest[DIST_SYNC, tx=false].BlockingChecker
> java.lang.AssertionError: Blocking call! java.net.SocketInputStream#socketRead0 on thread Thread[Connection.Receiver [127.0.0.1:34169 - 127.0.0.1:8001]-8,WorkDuringJoinTest-NodeC,5,ISPN-non-blocking-thread-group]
> at org.infinispan.util.CoreTestBlockHoundIntegration.lambda$applyTo$0(CoreTestBlockHoundIntegration.java:45) ~[test-classes/:?]
> at reactor.blockhound.BlockHound$Builder.lambda$install$8(BlockHound.java:383) ~[blockhound-1.0.3.RELEASE.jar:?]
> at reactor.blockhound.BlockHoundRuntime.checkBlocking(BlockHoundRuntime.java:89) [?:?]
> at java.net.SocketInputStream.socketRead0(SocketInputStream.java) [?:?]
> at java.net.SocketInputStream.socketRead(SocketInputStream.java:115) [?:?]
> at java.net.SocketInputStream.read(SocketInputStream.java:168) [?:?]
> at java.net.SocketInputStream.read(SocketInputStream.java:140) [?:?]
> at java.io.BufferedInputStream.fill(BufferedInputStream.java:252) [?:?]
> at java.io.BufferedInputStream.read(BufferedInputStream.java:271) [?:?]
> at java.io.DataInputStream.readInt(DataInputStream.java:392) [?:?]
> at org.jgroups.blocks.cs.TcpConnection$Receiver.run(TcpConnection.java:301) [jgroups-4.2.1.Final.jar:4.2.1.Final]
> at java.lang.Thread.run(Thread.java:834) [?:?]
> {noformat}
> The blocking call doesn't always happen, but it appears to be more common in the unstable CI builds:
> https://ci.infinispan.org/job/InfinispanAlternateBuilds/job/InfinispanUns...
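For context, the assertion above comes from BlockHound instrumentation in the test suite. If the blocking read inside JGroups' {{TcpConnection$Receiver}} turns out to be legitimate, one possible approach (illustrative only, not necessarily the fix chosen for this issue) is to allow that specific call site via the public BlockHound builder API:
{code:java}
import reactor.blockhound.BlockHound;

public class AllowJGroupsReceiverRead {
   public static void main(String[] args) {
      // Illustrative only: permit the blocking socket read performed inside
      // the JGroups TCP receiver loop so BlockHound stops failing the test.
      // The actual fix may instead change which threads count as non-blocking.
      BlockHound.builder()
            .allowBlockingCallsInside("org.jgroups.blocks.cs.TcpConnection$Receiver", "run")
            .install();
   }
}
{code}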
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
[JBoss JIRA] (ISPN-12097) Invalidation Cache with a shared store doesn't work properly after new SPI changes
by Will Burns (Jira)
[ https://issues.redhat.com/browse/ISPN-12097?page=com.atlassian.jira.plugi... ]
Will Burns updated ISPN-12097:
------------------------------
Fix Version/s: 12.0.0.Final
11.0.2.Final
> Invalidation Cache with a shared store doesn't work properly after new SPI changes
> ----------------------------------------------------------------------------------
>
> Key: ISPN-12097
> URL: https://issues.redhat.com/browse/ISPN-12097
> Project: Infinispan
> Issue Type: Bug
> Components: Core, Loaders and Stores
> Affects Versions: 11.0.1.Final
> Reporter: Paul Ferraro
> Assignee: Will Burns
> Priority: Blocker
> Fix For: 12.0.0.Final, 11.0.2.Final
>
> Attachments: Test.java
>
>
> There seems to be something amiss with the new NonBlockingStore changes. When a transactional invalidation cache is used with a shared cache store, I've observed entries published to the removePublisher of NonBlockingStore.batch(...) that should have targeted the writePublisher. This seems to happen when a batch only contains writes, but no removes.
> See the attached test to reproduce the issue; it executes two simple cache operations against a transactional vs. a non-transactional cache using a shared write-through store. The transactional version fails due to unexpected removals triggered by the batch(...) method (which, in the case of the JDBC store, delegates to the deleteBatch(...) and bulkUpdate(...) methods). TRACE logging indicates that entries are unexpectedly published to the removePublisher of batch(...) when transactions are enabled, causing entries to be removed from the store unexpectedly (as the result of a Cache.put(...)). When transactions are disabled, the batch(...) method is, of course, not in play, and everything works correctly via the individual write/delete methods.
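A minimal sketch of the scenario described above (the attached Test.java is the authoritative reproducer; the JDBC string-based store, the table/column names and the in-memory H2 connection below are illustrative assumptions):
{code:java}
import org.infinispan.Cache;
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.persistence.jdbc.configuration.JdbcStringBasedStoreConfigurationBuilder;
import org.infinispan.transaction.TransactionMode;

public class InvalidationSharedStoreSketch {
   public static void main(String[] args) {
      ConfigurationBuilder cb = new ConfigurationBuilder();
      cb.clustering().cacheMode(CacheMode.INVALIDATION_SYNC);
      // Switch to NON_TRANSACTIONAL to exercise the working, non-batched path.
      cb.transaction().transactionMode(TransactionMode.TRANSACTIONAL);
      cb.persistence()
            .addStore(JdbcStringBasedStoreConfigurationBuilder.class)
               .shared(true)
               .segmented(false)
               .table()
                  .tableNamePrefix("ISPN")
                  .idColumnName("ID").idColumnType("VARCHAR(255)")
                  .dataColumnName("DATA").dataColumnType("BLOB")
                  .timestampColumnName("TS").timestampColumnType("BIGINT")
               .connectionPool()
                  // Assumes an H2 driver on the classpath; any shared JDBC
                  // data source would do for the purposes of this sketch.
                  .connectionUrl("jdbc:h2:mem:ispn12097;DB_CLOSE_DELAY=-1")
                  .driverClass("org.h2.Driver")
                  .username("sa");

      GlobalConfigurationBuilder global = GlobalConfigurationBuilder.defaultClusteredBuilder();
      try (DefaultCacheManager manager = new DefaultCacheManager(global.build())) {
         manager.defineConfiguration("inval", cb.build());
         Cache<String, String> cache = manager.getCache("inval");

         // Two simple writes. With transactions enabled, the reported behaviour
         // is that this write-only batch ends up on the removePublisher of
         // NonBlockingStore.batch(...), so the store deletes instead of writing.
         cache.put("k1", "v1");
         cache.put("k2", "v2");
      }
   }
}
{code}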
--
This message was sent by Atlassian Jira
(v7.13.8#713008)