[JBoss JIRA] (SWSQE-899) Create plan document
by Filip Brychta (Jira)
Filip Brychta created SWSQE-899:
-----------------------------------
Summary: Create plan document
Key: SWSQE-899
URL: https://issues.jboss.org/browse/SWSQE-899
Project: Kiali QE
Issue Type: Sub-task
Reporter: Filip Brychta
Assignee: Filip Brychta
Which steps do we need to take before the outage, and which after the outage?
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
6 years, 9 months
[JBoss JIRA] (WFWIP-167) EAP Operator handling ConfigMap internally
by Martin Choma (Jira)
[ https://issues.jboss.org/browse/WFWIP-167?page=com.atlassian.jira.plugin.... ]
Martin Choma commented on WFWIP-167:
------------------------------------
Isn't this externalizing of configuration at odds with the infrastructure-immutability concept?
So far we have the overriding feature in s2i [1], but that keeps the infrastructure-immutability pattern. The current Operator, however, does not have s2i support.
Also note that since there will be only one Operator at a time, we can't remove StandaloneConfigMapSpec once s2i becomes part of the operator. So the question now is: is it worth adding it in the current state?
[1] https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_ap...
> EAP Operator handling ConfigMap internally
> ------------------------------------------
>
> Key: WFWIP-167
> URL: https://issues.jboss.org/browse/WFWIP-167
> Project: WildFly WIP
> Issue Type: Bug
> Components: OpenShift
> Reporter: Martin Choma
> Assignee: Jeff Mesnil
> Priority: Major
>
> If I understand the description in [1] correctly, to specify a custom standalone.xml I have to create a ConfigMap containing standalone.xml first and afterwards point the operator at this ConfigMap.
> Would it be possible to handle the creation of the ConfigMap and the storing of standalone.xml for me? Ideally I would just specify a file URI where the custom standalone.xml is located; this location would have to be accessible from the operator pod. In this way we could look at it as hiding internals (implementation details) from users.
> Currently, when a user wants to change standalone.xml, he does so in the ConfigMap, not in the operator. When changing standalone.xml through the ConfigMap, I assume the pod has to be restarted manually. The operator could do that for me.
> Alternatively, the restart could be triggered by storing the newer version of standalone.xml under another key, e.g. `standalone.xml.v2`, and changing `StandaloneConfigMapSpec.key` in the operator.
> What do you think? Have you considered this approach?
> [1] https://github.com/wildfly/wildfly-operator/blob/master/doc/apis.adoc#sta...
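For reference, the flow discussed above can be sketched roughly as follows. This is a hypothetical illustration only: the resource names are made up, and the exact API group/version and field names of the WildFlyServer resource should be checked against the apis.adoc in [1].

```yaml
# Illustrative sketch: create the ConfigMap holding standalone.xml first,
# then reference it from the operator's custom resource.
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-standalone-config          # illustrative name
data:
  standalone.xml: |
    <!-- custom standalone.xml content goes here -->
---
apiVersion: wildfly.org/v1alpha1       # API group/version may differ
kind: WildFlyServer
metadata:
  name: my-wildfly-server
spec:
  standaloneConfigMap:                 # StandaloneConfigMapSpec
    name: my-standalone-config
    key: standalone.xml                # switch to e.g. standalone.xml.v2 to trigger a restart
```

Changing `key` to point at a new entry in the same ConfigMap is the restart trigger described in the comment above.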
[JBoss JIRA] (JGRP-2362) Providing logical member name in JDBC_PING
by Bela Ban (Jira)
[ https://issues.jboss.org/browse/JGRP-2362?page=com.atlassian.jira.plugin.... ]
Bela Ban commented on JGRP-2362:
--------------------------------
I fail to understand what your problem is. Can you illustrate it with a step-by-step example?
> Providing logical member name in JDBC_PING
> ------------------------------------------
>
> Key: JGRP-2362
> URL: https://issues.jboss.org/browse/JGRP-2362
> Project: JGroups
> Issue Type: Feature Request
> Affects Versions: 4.0.17, 4.0.18, 4.0.19, 4.1.0, 4.0.20
> Reporter: S Pokutniy
> Assignee: Bela Ban
> Priority: Minor
> Fix For: 4.1.2
>
>
> When using JDBC_PING with logical names instead of UUIDs, if a cluster member that is not the coordinator crashes or gets killed, its dataset remains in the database until the coordinator changes (independently of remove_old_coords_on_view_change / remove_all_data_on_view_change). If the cluster is then restarted, the old dataset makes connect() much slower (+30 seconds), as the members seem to be trying to connect to it. The parameter remove_all_data_on_view_change seems to be the solution, but it does not take effect as long as the coordinator does not change, so in practice it behaves the same as remove_old_coords_on_view_change.
> The only workaround seems to be to provide an appropriate delete statement in the initialize_sql parameter, which would delete the old entry, for example: delete from JGROUPSPING where ping_data like '%logical name%'. However, this is neither really quick nor ideal, as ping_data's datatype is bytea or bit varying.
> It would be great to also store the logical name in JGROUPSPING so that it is written by default in insert(). This should be easy to implement, as this information is accessible through PingData.
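The workaround described above could look roughly like this. A sketch only: the connection settings and table layout are placeholders, the logical name `node-1` is invented, and whether initialize_sql accepts multiple semicolon-separated statements depends on the JDBC driver in use.

```xml
<!-- Sketch of the initialize_sql workaround (placeholder values).
     The trailing DELETE purges stale rows matching this member's
     logical name before the member inserts its own row. -->
<JDBC_PING connection_url="jdbc:postgresql://db-host/jgroups"
           connection_username="user"
           connection_password="pass"
           connection_driver="org.postgresql.Driver"
           remove_all_data_on_view_change="true"
           initialize_sql="CREATE TABLE IF NOT EXISTS JGROUPSPING (
                             own_addr varchar(200) NOT NULL,
                             cluster_name varchar(200) NOT NULL,
                             ping_data bytea,
                             PRIMARY KEY (own_addr, cluster_name));
                           DELETE FROM JGROUPSPING WHERE ping_data LIKE '%node-1%'"/>
```

As the reporter notes, a LIKE pattern over a binary ping_data column is neither fast nor clean, which is exactly why storing the logical name in its own column would be preferable.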
[JBoss JIRA] (JGRP-2361) Error related to Jgroup and Database connection is getting reset
by Bela Ban (Jira)
[ https://issues.jboss.org/browse/JGRP-2361?page=com.atlassian.jira.plugin.... ]
Bela Ban edited comment on JGRP-2361 at 7/31/19 7:09 AM:
---------------------------------------------------------
Could be that you're running different versions of JGroups (cookie sent by /172.26.235.231:43565 does not match own cookie; terminating connection). What's your JGroups config (jgroups-tcp.xml)?
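One quick way to check for a version mismatch across nodes is to print the JGroups version shipped in each node's jar; the jar path below is illustrative and depends on the installation.

```shell
# Print the JGroups version embedded in the jar on each node;
# the output should be identical on every cluster member.
java -cp /path/to/jgroups.jar org.jgroups.Version
```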
was (Author: belaban):
Could be that you're running different versions of JGroups (cookie sent by /172.26.235.231:43565 does not match own cookie; terminating connection). What's your JGroups config?
> Error related to Jgroup and Database connection is getting reset
> ----------------------------------------------------------------
>
> Key: JGRP-2361
> URL: https://issues.jboss.org/browse/JGRP-2361
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 3.6.11
> Environment: Hybris running on tomcat - Centos 7
> Reporter: karthikeyan Aruljothi
> Assignee: Bela Ban
> Priority: Major
> Attachments: Jgroup error in preprod-000.txt, Jgroups blocking and terminating connection.txt, Jgroups error in console.txt, error Jgroups.txt
>
>
> Hi,
> we are facing an issue with our cluster configuration; because of it the JVM response time increases, and only after clearing the cache / restarting all nodes does the application work as expected again.
> When the issue arises, one of the cores reaches 100% CPU utilization and the server has to be restarted, otherwise it never processes any request. Below is our configuration in local.properties; error logs are provided as attachments. We can see errors in the logs related to JGroups blocking and connections getting terminated between nodes.
> Please let us know your valuable inputs on what exactly is causing the slowness and then blocking the whole server.
> Attached are the cluster configuration for each node and the error logs.
> In addition, we are getting the below errors while deploying/restarting the servers:
> WARN [localhost-startStop-1] [GMS] hybrisnode-0: JOIN(hybrisnode-0) sent to hybrisnode-2 timed out (after 3000 ms), on try 3
> WARN [pool-3-thread-1] [GMS] hybrisnode-3: JOIN(hybrisnode-3) sent to hybrisnode-1 timed out (after 3000 ms), on try 4