[JBoss JIRA] (WFWIP-221) jps can't see server process in OpenShift
by Jan Blizňák (Jira)
[ https://issues.jboss.org/browse/WFWIP-221?page=com.atlassian.jira.plugin.... ]
Jan Blizňák commented on WFWIP-221:
-----------------------------------
Additional info: jps can't even see itself, if that is of any help.
I googled this: https://stackoverflow.com/questions/3805376/jps-returns-no-output-even-wh...
This led me to check the contents of /tmp, and indeed there is a difference:
released CD image 17.0.5:
{code:java}
$ id
uid=1000290000(jboss) gid=0(root) groups=0(root),185(jboss),1000290000
$ ls -l /tmp
total 4
drwxr-xr-x. 2 jboss root 17 Oct 1 15:23 hsperfdata_jboss
drwxr-xr-x. 2 root root 16 Sep 6 14:42 hsperfdata_root
-rwx------. 1 root root 1379 Jul 23 16:18 ks-script-4c1pmxe1
{code}
eap-cd-openshift-rhel8:18.0-EAP7-1216:
{code:java}
$ id
uid=1000290000(jboss) gid=0(root) groups=0(root),185(jboss),1000290000
$ ls -l /tmp
total 12
-rw-r--r--. 1 jboss root 93 Oct 1 15:26 cli-script-property-1569943601.cli
-rw-r--r--. 1 jboss root 410 Oct 1 15:26 ds-timer-service-data-store
drwxr-xr-x. 2 185 root 6 Sep 30 17:01 hsperfdata_jboss
drwxr-xr-x. 2 root root 16 Sep 30 16:59 hsperfdata_root
-rwx------. 1 root root 1379 Sep 16 12:25 ks-script-mr2oi4id
{code}
See the difference in the owner of hsperfdata_jboss; maybe that can help. I don't know why the jboss user's directory is owned differently in the new image (uid 185 instead of jboss).
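For context, jps discovers local JVMs through the per-user performance data files the JVM publishes under /tmp/hsperfdata_<username>; if that directory exists but is not writable by the JVM's effective user, the perf data file is never created and jps lists nothing, not even itself. A quick check along those lines (a sketch; the uids are the ones from the listings above):
{code:java}
# the JVM writes /tmp/hsperfdata_<username>/<pid>; jps only reads those files
ls -ln /tmp/hsperfdata_jboss   # numeric owner: 185 in the new image, not the running uid
id -u                          # 1000290000 here: the effective uid the server runs under
# if the directory owner differs from the running uid and the directory is not
# writable, the JVM silently skips publishing its perf data and jps sees nothing
{code}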
> jps can't see server process in OpenShift
> -----------------------------------------
>
> Key: WFWIP-221
> URL: https://issues.jboss.org/browse/WFWIP-221
> Project: WildFly WIP
> Issue Type: Bug
> Components: OpenShift
> Reporter: Jan Blizňák
> Assignee: Jean Francois Denise
> Priority: Critical
>
> This is a somewhat curious issue.
> When a pod of the app image is deployed (the server is started) and I switch to the terminal in the OpenShift GUI, running `jps -l` prints nothing, although I can see in the logs that the server is up, and I can find the java process in /proc/*/cmdline (the image has no `ps` utility).
> This happens only in the OpenShift environment, and it is a regression against the latest published CD image, where this worked.
> Funnily enough, when I run the image in docker and start the server via /opt/eap/bin/openshift-launch.sh, jps can see the java process.
> {code:java}
> # this works fine
> docker run -it docker-registry.upshift.redhat.com/kwills/eap-cd-openshift-rhel8:18.0-EAP... sh
> sh-4.4$ /opt/eap/bin/openshift-launch.sh &
> sh-4.4$ jps -l
> {code}
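> For completeness, the server process can be confirmed from the pod terminal without `ps` straight from /proc (a sketch; the grep pattern is illustrative):
> {code:java}
> # cmdline is NUL-separated, so expand it before matching
> for p in /proc/[0-9]*; do
>   tr '\0' ' ' < "$p/cmdline" | grep -q java && echo "${p#/proc/}: $(tr '\0' ' ' < "$p/cmdline" | cut -c1-100)"
> done
> {code}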
[JBoss JIRA] (WFWIP-211) emptyDir.sizeLimit not propagated to Pod Spec
by Martin Choma (Jira)
[ https://issues.jboss.org/browse/WFWIP-211?page=com.atlassian.jira.plugin.... ]
Martin Choma commented on WFWIP-211:
------------------------------------
It is really ignored. I am trying this on OpenShift with wildfly-operator. If I add `sizeLimit: 2Mi` directly to the StatefulSet, it is thrown away on Save. It behaves the same if I add any unknown parameter, e.g. `a: b`.
I will create an OpenShift issue to get that explained.
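What the API server actually kept can be inspected directly (a sketch; the StatefulSet name is assumed to match the WildFlyServer name from the CR below):
{code:java}
# print the emptyDir volume as stored on the StatefulSet pod template
oc get statefulset operator-empty-dir -o jsonpath='{.spec.template.spec.volumes[*].emptyDir}'
{code}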
> emptyDir.sizeLimit not propagated to Pod Spec
> ---------------------------------------------
>
> Key: WFWIP-211
> URL: https://issues.jboss.org/browse/WFWIP-211
> Project: WildFly WIP
> Issue Type: Bug
> Components: OpenShift
> Reporter: Martin Choma
> Assignee: Jeff Mesnil
> Priority: Blocker
> Labels: operator
>
> # {code:yaml}
> apiVersion: wildfly.org/v1alpha1
> kind: WildFlyServer
> metadata:
>   name: operator-empty-dir
>   namespace: mchoma
> spec:
>   applicationImage: 'registry.access.redhat.com/jboss-eap-7/eap72-openshift:1.1'
>   size: 1
>   storage:
>     emptyDir:
>       medium: Memory
>       sizeLimit: 1Mi
> {code}
> # Wait until the pods are started and look into the pod YAML definition; "1Mi" is not there:
> {code:yaml}
> ...
> serviceAccount: default
> serviceAccountName: default
> subdomain: operator-empty-dir-headless
> terminationGracePeriodSeconds: 30
> volumes:
>   - emptyDir:
>       medium: Memory
>     name: operator-empty-dir-volume
>   - name: default-token-j2grg
>     secret:
>       defaultMode: 420
>       secretName: default-token-j2grg
> status:
>   conditions:
> ...
> {code}
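> The same can be checked non-interactively (a sketch; the pod name assumes StatefulSet ordinal naming, i.e. operator-empty-dir-0):
> {code:java}
> # sizeLimit should appear here next to medium, but only medium comes back
> oc get pod operator-empty-dir-0 -o jsonpath='{.spec.volumes[0].emptyDir}'
> {code}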
[JBoss JIRA] (WFWIP-187) Changes to PVC are not reflected in Operator
by Martin Choma (Jira)
[ https://issues.jboss.org/browse/WFWIP-187?page=com.atlassian.jira.plugin.... ]
Martin Choma updated WFWIP-187:
-------------------------------
Comment: was deleted
(was: It is really ignored. I am trying this on OpenShift with wildfly-operator. If I add `sizeLimit: 2Mi` directly to the StatefulSet, it is thrown away on Save. It behaves the same if I add any unknown parameter, e.g. `a: b`.
I will create an OpenShift issue to get that explained.)
> Changes to PVC are not reflected in Operator
> --------------------------------------------
>
> Key: WFWIP-187
> URL: https://issues.jboss.org/browse/WFWIP-187
> Project: WildFly WIP
> Issue Type: Bug
> Components: OpenShift
> Reporter: Martin Choma
> Assignee: Jeff Mesnil
> Priority: Blocker
> Labels: operator
>
> Any changes (adding, removing, or updating) made to the PVC after the WildFlyServer CR was created are not reflected in the underlying PVC Kubernetes object.
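> An example of the kind of change that gets lost (a sketch; the CR name and sizes are illustrative, and the storage path follows the WildFlyServer spec):
> {code:java}
> # grow the requested storage on an existing WildFlyServer CR ...
> oc patch wildflyserver quickstart --type=merge \
>   -p '{"spec":{"storage":{"volumeClaimTemplate":{"spec":{"resources":{"requests":{"storage":"5Gi"}}}}}}}'
> # ... then compare with what the provisioned PVCs still report
> oc get pvc
> {code}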
[JBoss JIRA] (WFWIP-206) scale down isn't successful when there are in-doubt transactions on the pod
by Martin Simka (Jira)
[ https://issues.jboss.org/browse/WFWIP-206?page=com.atlassian.jira.plugin.... ]
Martin Simka edited comment on WFWIP-206 at 10/1/19 11:37 AM:
--------------------------------------------------------------
update:
* {{testTxStatelessServerSecondCommitThrowRmFail}} fails, I created WFWIP-218
* {{testTxStatelessClientSecondCommitThrowRmFail}} passes
* {{testTxStatelessServerSecondPrepareJvmHalt}} passes
* {{testTxStatelessClientSecondPrepareJvmHalt}} fails, but I'm not sure if its expectations are correct; anyway, I created WFWIP-222
was (Author: simkam):
update:
* failure in {{testTxStatelessServerSecondCommitThrowRmFail}} - fails, I created WFWIP-218
* {{testTxStatelessClientSecondCommitThrowRmFail}} passes
* {{testTxStatelessServerSecondPrepareJvmHalt}} passes
* {{testTxStatelessClientSecondPrepareJvmHalt}} pending retest
> scale down isn't successful when there are in-doubt transactions on the pod
> ---------------------------------------------------------------------------
>
> Key: WFWIP-206
> URL: https://issues.jboss.org/browse/WFWIP-206
> Project: WildFly WIP
> Issue Type: Bug
> Components: OpenShift
> Environment: operator, built from [1984d98154f11a7473be065e4ae7f54b1812c9b0|https://github.com/wildfly/wildf...] :
> {noformat}
> docker-registry.engineering.redhat.com/jbossqe-eap/wildfly-operator:latest
> {noformat}
> EAP image:
> {noformat}
> docker-registry.engineering.redhat.com/ochaloup/wildfly18-snapshot:190909...
> {noformat}
> Reporter: Martin Simka
> Assignee: Ondrej Chaloupka
> Priority: Blocker
> Labels: operator
>
> While testing tx recovery in OpenShift, I see that scaling down a pod that has an in-doubt transaction on it isn't successful.
> Scenario:
> *ejb client* (app tx-client, pod tx-client-0):
> * EJB business method
> ** lookup remote EJB
> ** enlist XA resource 1 to transaction
> ** enlist XA resource 2 to transaction
> ** call remote EJB
> *ejb server* (app tx-server, pod tx-server-0):
> * EJB business method
> ** enlist XA resource 1 to transaction
> ** enlist XA resource 2 to transaction
> The ejb server's XA resource 2 fails with {{XAException(XAException.XAER_RMFAIL)}}.
> Then the test scales the tx-server pod down (size from 1 to 0), but the scale down never completes.
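> (For reference, the scale down is driven through the CR, roughly as sketched here, assuming the operator acts on spec.size:)
> {code:java}
> oc patch wildflyserver tx-server --type=merge -p '{"spec":{"size":0}}'
> oc get pods -w   # tx-server-0 stays up; the scale down never finishes
> {code}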
> Log from operator:
> {noformat}
> {"level":"info","ts":1568905676.6303623,"logger":"controller_wildflyserver","msg":"Transaction recovery scaledown processing","Request.Namespace":"msimka-namespace","Request.Name":"tx-server","Pod Name":"tx-server-0","IP Address":"172.17.0.10"}
> {"level":"info","ts":1568905676.7313502,"logger":"controller_wildflyserver","msg":"Enabling recovery listener for processing scaledown at tx-server-0","Request.Namespace":"msimka-namespace","Request.Name":"tx-server"}
> {"level":"info","ts":1568905679.4325309,"logger":"controller_wildflyserver","msg":"Query to find the transaction recovery port to force scan at pod tx-server-0","Request.Namespace":"msimka-namespace","Request.Name":"tx-server"}
> {"level":"info","ts":1568905686.7914035,"logger":"controller_wildflyserver","msg":"Executing recovery scan at tx-server-0","Request.Namespace":"msimka-namespace","Request.Name":"tx-server","Pod IP":"172.17.0.10","Recovery port":4712}
> {"level":"info","ts":1568905702.0583296,"logger":"controller_wildflyserver","msg":"In-doubt transactions in object store","Request.Namespace":"msimka-namespace","Request.Name":"tx-server","Pod Name":"tx-server-0","Message":"Recovery scan to be invoked as the transaction log storage is not empty for pod scaling down pod tx-server-0, transaction list: map[0:ffffac11000a:991c183:5d8399a1:13:map[age-in-seconds:23 id:0:ffffac11000a:991c183:5d8399a1:13 jmx-name:<nil> participants:map[java:/MockXAResource:map[eis-product-name:MockXAResource Test eis-product-version:0.1.Mock jmx-name:<nil> jndi-name:java:/MockXAResource status:PREPARED type:/StateManager/AbstractRecord/XAResourceRecord]] type:StateManager/BasicAction/TwoPhaseCoordinator/AtomicAction/SubordinateAtomicAction/JCA]]"}
> {"level":"info","ts":1568905711.1034026,"logger":"controller_wildflyserver","msg":"Reconciling WildFlyServer","Request.Namespace":"msimka-namespace","Request.Name":"tx-server"}
> {"level":"info","ts":1568905711.103548,"logger":"controller_wildflyserver","msg":"Transaction recovery scaledown processing","Request.Namespace":"msimka-namespace","Request.Name":"tx-server","Pod Name":"tx-server-0","IP Address":"172.17.0.10"}
> {"level":"info","ts":1568905711.109706,"logger":"controller_wildflyserver","msg":"Executing recovery scan at tx-server-0","Request.Namespace":"msimka-namespace","Request.Name":"tx-server","Pod IP":"172.17.0.10","Recovery port":4712}
> {"level":"error","ts":1568905711.2608829,"logger":"controller_wildflyserver","msg":"Failures during scaling down recovery processing","Request.Namespace":"msimka-namespace","Request.Name":"tx-server","Desired replica size":0,"Number of pods to be removed":1,"error":"Found 1 errors:\n [[Failed to run transaction recovery scan for scaling down pod tx-server-0. Please, verify the pod log file. Error: Error to get response for command SCAN sending to 172.17.0.10:4712, error: EOF]],","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-l..."}
> {"level":"info","ts":1568905711.2609518,"logger":"controller_wildflyserver","msg":"Scaling down statefulset by verification if pods are clean by recovery","StatefulSet.Namespace":"msimka-namespace","StatefulSet.Name":"tx-server"}
> {"level":"info","ts":1568905711.2609615,"logger":"controller_wildflyserver","msg":"Statefulset was not fully scaled to the desired replica size 0 while StatefulSet is to be at size 1. Some pods were not cleaned by recovery. Verify status of the WildflyServer tx-server","StatefulSet.Namespace":"msimka-namespace","StatefulSet.Name":"tx-server"}
> {"level":"info","ts":1568905711.2795022,"logger":"controller_wildflyserver","msg":"Updating StatefulSet to be up to date with the WildFlyServer Spec","StatefulSet.Namespace":"msimka-namespace","StatefulSet.Name":"tx-server"}
> {"level":"info","ts":1568905711.2795491,"logger":"controller_wildflyserver","msg":"Reconciling WildFlyServer","Request.Namespace":"msimka-namespace","Request.Name":"tx-server"}
> {"level":"info","ts":1568905711.2796504,"logger":"controller_wildflyserver","msg":"Transaction recovery scaledown processing","Request.Namespace":"msimka-namespace","Request.Name":"tx-server","Pod Name":"tx-server-0","IP Address":"172.17.0.10"}
> {"level":"info","ts":1568905711.2937052,"logger":"controller_wildflyserver","msg":"Executing recovery scan at tx-server-0","Request.Namespace":"msimka-namespace","Request.Name":"tx-server","Pod IP":"172.17.0.10","Recovery port":4712}
> {"level":"error","ts":1568905711.294249,"logger":"controller_wildflyserver","msg":"Failures during scaling down recovery processing","Request.Namespace":"msimka-namespace","Request.Name":"tx-server","Desired replica size":0,"Number of pods to be removed":1,"error":"Found 1 errors:\n [[Failed to run transaction recovery scan for scaling down pod tx-server-0. Please, verify the pod log file. Error: Cannot process TCP connection to 172.17.0.10:4712, error: dial tcp 172.17.0.10:4712: connect: connection refused]],","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-l..."}
> {"level":"info","ts":1568905711.294342,"logger":"controller_wildflyserver","msg":"Scaling down statefulset by verification if pods are clean by recovery","StatefulSet.Namespace":"msimka-namespace","StatefulSet.Name":"tx-server"}
> {"level":"info","ts":1568905711.294417,"logger":"controller_wildflyserver","msg":"Statefulset was not fully scaled to the desired replica size 0 while StatefulSet is to be at size 1. Some pods were not cleaned by recovery. Verify status of the WildflyServer tx-server","StatefulSet.Namespace":"msimka-namespace","StatefulSet.Name":"tx-server"}
> {"level":"error","ts":1568905711.311673,"logger":"controller_wildflyserver","msg":"Failed to Update StatefulSet.","StatefulSet.Namespace":"msimka-namespace","StatefulSet.Name":"tx-server","error":"Operation cannot be fulfilled on statefulsets.apps \"tx-server\": the object has been modified; please apply your changes to the latest version and try again","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-l..."}
> {"level":"error","ts":1568905711.311745,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"wildflyserver-controller","request":"msimka-namespace/tx-server","error":"Operation cannot be fulfilled on statefulsets.apps \"tx-server\": the object has been modified; please apply your changes to the latest version and try again","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-l..."}
> {"level":"info","ts":1568905712.3137681,"logger":"controller_wildflyserver","msg":"Reconciling WildFlyServer","Request.Namespace":"msimka-namespace","Request.Name":"tx-server"}
> {"level":"info","ts":1568905712.3139439,"logger":"controller_wildflyserver","msg":"Transaction recovery scaledown processing","Request.Namespace":"msimka-namespace","Request.Name":"tx-server","Pod Name":"tx-server-0","IP Address":"172.17.0.10"}
> {"level":"info","ts":1568905712.3253288,"logger":"controller_wildflyserver","msg":"Executing recovery scan at tx-server-0","Request.Namespace":"msimka-namespace","Request.Name":"tx-server","Pod IP":"172.17.0.10","Recovery port":4712}
> {"level":"error","ts":1568905712.3255754,"logger":"controller_wildflyserver","msg":"Failures during scaling down recovery processing","Request.Namespace":"msimka-namespace","Request.Name":"tx-server","Desired replica size":0,"Number of pods to be removed":1,"error":"Found 1 errors:\n [[Failed to run transaction recovery scan for scaling down pod tx-server-0. Please, verify the pod log file. Error: Cannot process TCP connection to 172.17.0.10:4712, error: dial tcp 172.17.0.10:4712: connect: connection refused]],","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-l..."}
> {"level":"info","ts":1568905712.3256311,"logger":"controller_wildflyserver","msg":"Scaling down statefulset by verification if pods are clean by recovery","StatefulSet.Namespace":"msimka-namespace","StatefulSet.Name":"tx-server"}
> {"level":"info","ts":1568905712.3256419,"logger":"controller_wildflyserver","msg":"Statefulset was not fully scaled to the desired replica size 0 while StatefulSet is to be at size 1. Some pods were not cleaned by recovery. Verify status of the WildflyServer tx-server","StatefulSet.Namespace":"msimka-namespace","StatefulSet.Name":"tx-server"}
> {noformat}
[JBoss JIRA] (WFWIP-206) scale down isn't successful when there are in-doubt transactions on the pod
by Martin Simka (Jira)
[ https://issues.jboss.org/browse/WFWIP-206?page=com.atlassian.jira.plugin.... ]
Martin Simka edited comment on WFWIP-206 at 10/1/19 11:37 AM:
--------------------------------------------------------------
update:
* {{testTxStatelessServerSecondCommitThrowRmFail}} fails, I created WFWIP-218
* {{testTxStatelessClientSecondCommitThrowRmFail}} passes
* {{testTxStatelessServerSecondPrepareJvmHalt}} passes
* {{testTxStatelessClientSecondPrepareJvmHalt}} fails, but I'm not sure if its expectations are correct; anyway, I created WFWIP-222
was (Author: simkam):
update:
* failure in {{testTxStatelessServerSecondCommitThrowRmFail}} - fails, I created WFWIP-218
* {{testTxStatelessClientSecondCommitThrowRmFail}} passes
* {{testTxStatelessServerSecondPrepareJvmHalt}} passes
* {{testTxStatelessClientSecondPrepareJvmHalt}} fails, but I'm not sure if it's expectations are correct, anyway I created WFWIP-222
> scale down isn't successful when there are in-doubt transactions on the pod
> ---------------------------------------------------------------------------
>
> Key: WFWIP-206
> URL: https://issues.jboss.org/browse/WFWIP-206
> Project: WildFly WIP
> Issue Type: Bug
> Components: OpenShift
> Environment: operator, built from [1984d98154f11a7473be065e4ae7f54b1812c9b0|https://github.com/wildfly/wildf...] :
> {noformat}
> docker-registry.engineering.redhat.com/jbossqe-eap/wildfly-operator:latest
> {noformat}
> EAP image:
> {noformat}
> docker-registry.engineering.redhat.com/ochaloup/wildfly18-snapshot:190909...
> {noformat}
> Reporter: Martin Simka
> Assignee: Ondrej Chaloupka
> Priority: Blocker
> Labels: operator
>
> While testing tx recovery in OpenShift, I see that scaling down a pod that has an in-doubt transaction on it isn't successful.
> Scenario:
> *ejb client* (app tx-client, pod tx-client-0):
> * EJB business method
> ** lookup remote EJB
> ** enlist XA resource 1 to transaction
> ** enlist XA resource 2 to transaction
> ** call remote EJB
> *ejb server* (app tx-server, pod tx-server-0):
> * EJB business method
> ** enlist XA resource 1 to transaction
> ** enlist XA resource 2 to transaction
> The ejb server's XA resource 2 fails with {{XAException(XAException.XAER_RMFAIL)}}.
> Then the test scales the tx-server pod down (size from 1 to 0), but the scale down never completes.
> Log from operator:
> {noformat}
> {"level":"info","ts":1568905676.6303623,"logger":"controller_wildflyserver","msg":"Transaction recovery scaledown processing","Request.Namespace":"msimka-namespace","Request.Name":"tx-server","Pod Name":"tx-server-0","IP Address":"172.17.0.10"}
> {"level":"info","ts":1568905676.7313502,"logger":"controller_wildflyserver","msg":"Enabling recovery listener for processing scaledown at tx-server-0","Request.Namespace":"msimka-namespace","Request.Name":"tx-server"}
> {"level":"info","ts":1568905679.4325309,"logger":"controller_wildflyserver","msg":"Query to find the transaction recovery port to force scan at pod tx-server-0","Request.Namespace":"msimka-namespace","Request.Name":"tx-server"}
> {"level":"info","ts":1568905686.7914035,"logger":"controller_wildflyserver","msg":"Executing recovery scan at tx-server-0","Request.Namespace":"msimka-namespace","Request.Name":"tx-server","Pod IP":"172.17.0.10","Recovery port":4712}
> {"level":"info","ts":1568905702.0583296,"logger":"controller_wildflyserver","msg":"In-doubt transactions in object store","Request.Namespace":"msimka-namespace","Request.Name":"tx-server","Pod Name":"tx-server-0","Message":"Recovery scan to be invoked as the transaction log storage is not empty for pod scaling down pod tx-server-0, transaction list: map[0:ffffac11000a:991c183:5d8399a1:13:map[age-in-seconds:23 id:0:ffffac11000a:991c183:5d8399a1:13 jmx-name:<nil> participants:map[java:/MockXAResource:map[eis-product-name:MockXAResource Test eis-product-version:0.1.Mock jmx-name:<nil> jndi-name:java:/MockXAResource status:PREPARED type:/StateManager/AbstractRecord/XAResourceRecord]] type:StateManager/BasicAction/TwoPhaseCoordinator/AtomicAction/SubordinateAtomicAction/JCA]]"}
> {"level":"info","ts":1568905711.1034026,"logger":"controller_wildflyserver","msg":"Reconciling WildFlyServer","Request.Namespace":"msimka-namespace","Request.Name":"tx-server"}
> {"level":"info","ts":1568905711.103548,"logger":"controller_wildflyserver","msg":"Transaction recovery scaledown processing","Request.Namespace":"msimka-namespace","Request.Name":"tx-server","Pod Name":"tx-server-0","IP Address":"172.17.0.10"}
> {"level":"info","ts":1568905711.109706,"logger":"controller_wildflyserver","msg":"Executing recovery scan at tx-server-0","Request.Namespace":"msimka-namespace","Request.Name":"tx-server","Pod IP":"172.17.0.10","Recovery port":4712}
> {"level":"error","ts":1568905711.2608829,"logger":"controller_wildflyserver","msg":"Failures during scaling down recovery processing","Request.Namespace":"msimka-namespace","Request.Name":"tx-server","Desired replica size":0,"Number of pods to be removed":1,"error":"Found 1 errors:\n [[Failed to run transaction recovery scan for scaling down pod tx-server-0. Please, verify the pod log file. Error: Error to get response for command SCAN sending to 172.17.0.10:4712, error: EOF]],","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-l..."}
> {"level":"info","ts":1568905711.2609518,"logger":"controller_wildflyserver","msg":"Scaling down statefulset by verification if pods are clean by recovery","StatefulSet.Namespace":"msimka-namespace","StatefulSet.Name":"tx-server"}
> {"level":"info","ts":1568905711.2609615,"logger":"controller_wildflyserver","msg":"Statefulset was not fully scaled to the desired replica size 0 while StatefulSet is to be at size 1. Some pods were not cleaned by recovery. Verify status of the WildflyServer tx-server","StatefulSet.Namespace":"msimka-namespace","StatefulSet.Name":"tx-server"}
> {"level":"info","ts":1568905711.2795022,"logger":"controller_wildflyserver","msg":"Updating StatefulSet to be up to date with the WildFlyServer Spec","StatefulSet.Namespace":"msimka-namespace","StatefulSet.Name":"tx-server"}
> {"level":"info","ts":1568905711.2795491,"logger":"controller_wildflyserver","msg":"Reconciling WildFlyServer","Request.Namespace":"msimka-namespace","Request.Name":"tx-server"}
> {"level":"info","ts":1568905711.2796504,"logger":"controller_wildflyserver","msg":"Transaction recovery scaledown processing","Request.Namespace":"msimka-namespace","Request.Name":"tx-server","Pod Name":"tx-server-0","IP Address":"172.17.0.10"}
> {"level":"info","ts":1568905711.2937052,"logger":"controller_wildflyserver","msg":"Executing recovery scan at tx-server-0","Request.Namespace":"msimka-namespace","Request.Name":"tx-server","Pod IP":"172.17.0.10","Recovery port":4712}
> {"level":"error","ts":1568905711.294249,"logger":"controller_wildflyserver","msg":"Failures during scaling down recovery processing","Request.Namespace":"msimka-namespace","Request.Name":"tx-server","Desired replica size":0,"Number of pods to be removed":1,"error":"Found 1 errors:\n [[Failed to run transaction recovery scan for scaling down pod tx-server-0. Please, verify the pod log file. Error: Cannot process TCP connection to 172.17.0.10:4712, error: dial tcp 172.17.0.10:4712: connect: connection refused]],","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-l..."}
> {"level":"info","ts":1568905711.294342,"logger":"controller_wildflyserver","msg":"Scaling down statefulset by verification if pods are clean by recovery","StatefulSet.Namespace":"msimka-namespace","StatefulSet.Name":"tx-server"}
> {"level":"info","ts":1568905711.294417,"logger":"controller_wildflyserver","msg":"Statefulset was not fully scaled to the desired replica size 0 while StatefulSet is to be at size 1. Some pods were not cleaned by recovery. Verify status of the WildflyServer tx-server","StatefulSet.Namespace":"msimka-namespace","StatefulSet.Name":"tx-server"}
> {"level":"error","ts":1568905711.311673,"logger":"controller_wildflyserver","msg":"Failed to Update StatefulSet.","StatefulSet.Namespace":"msimka-namespace","StatefulSet.Name":"tx-server","error":"Operation cannot be fulfilled on statefulsets.apps \"tx-server\": the object has been modified; please apply your changes to the latest version and try again","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-l..."}
> {"level":"error","ts":1568905711.311745,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"wildflyserver-controller","request":"msimka-namespace/tx-server","error":"Operation cannot be fulfilled on statefulsets.apps \"tx-server\": the object has been modified; please apply your changes to the latest version and try again","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-l..."}
> {"level":"info","ts":1568905712.3137681,"logger":"controller_wildflyserver","msg":"Reconciling WildFlyServer","Request.Namespace":"msimka-namespace","Request.Name":"tx-server"}
> {"level":"info","ts":1568905712.3139439,"logger":"controller_wildflyserver","msg":"Transaction recovery scaledown processing","Request.Namespace":"msimka-namespace","Request.Name":"tx-server","Pod Name":"tx-server-0","IP Address":"172.17.0.10"}
> {"level":"info","ts":1568905712.3253288,"logger":"controller_wildflyserver","msg":"Executing recovery scan at tx-server-0","Request.Namespace":"msimka-namespace","Request.Name":"tx-server","Pod IP":"172.17.0.10","Recovery port":4712}
> {"level":"error","ts":1568905712.3255754,"logger":"controller_wildflyserver","msg":"Failures during scaling down recovery processing","Request.Namespace":"msimka-namespace","Request.Name":"tx-server","Desired replica size":0,"Number of pods to be removed":1,"error":"Found 1 errors:\n [[Failed to run transaction recovery scan for scaling down pod tx-server-0. Please, verify the pod log file. Error: Cannot process TCP connection to 172.17.0.10:4712, error: dial tcp 172.17.0.10:4712: connect: connection refused]],","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-l..."}
> {"level":"info","ts":1568905712.3256311,"logger":"controller_wildflyserver","msg":"Scaling down statefulset by verification if pods are clean by recovery","StatefulSet.Namespace":"msimka-namespace","StatefulSet.Name":"tx-server"}
> {"level":"info","ts":1568905712.3256419,"logger":"controller_wildflyserver","msg":"Statefulset was not fully scaled to the desired replica size 0 while StatefulSet is to be at size 1. Some pods were not cleaned by recovery. Verify status of the WildflyServer tx-server","StatefulSet.Namespace":"msimka-namespace","StatefulSet.Name":"tx-server"}
> {noformat}
[JBoss JIRA] (WFWIP-209) WildFlyServerStatus is not present in operator description
by Martin Choma (Jira)
[ https://issues.jboss.org/browse/WFWIP-209?page=com.atlassian.jira.plugin.... ]
Martin Choma closed WFWIP-209.
------------------------------
Resolution: Cannot Reproduce
Could not reproduce on latest image wildfly-operator:0.2.0-fe4dece.
> WildFlyServerStatus is not present in operator description
> -----------------------------------------------------------
>
> Key: WFWIP-209
> URL: https://issues.jboss.org/browse/WFWIP-209
> Project: WildFly WIP
> Issue Type: Bug
> Components: OpenShift
> Reporter: Petr Kremensky
> Assignee: Jeff Mesnil
> Priority: Blocker
> Labels: operator
>
> WFWIP-163 is back.
> Operator image build from https://github.com/wildfly/wildfly-operator/commit/73722306343d8650b7d3f9...
> Install the operator using the run-openshift.sh script (the yaml files from ^ are used).
> To reproduce:
> {noformat}
> $ oc apply -f quickstart-cr.yaml
> $ oc get pods
> NAME                                READY   STATUS    RESTARTS   AGE
> quickstart-0                        1/1     Running   0          3m39s
> quickstart-1                        1/1     Running   0          3m39s
> wildfly-operator-777b5ccd87-cf2vf   1/1     Running   0          4m22s
> $ oc describe wildflyserver quickstart
> Name:         quickstart
> Namespace:    pkremens-namespace
> Labels:       <none>
> Annotations:  kubectl.kubernetes.io/last-applied-configuration:
>                 {"apiVersion":"wildfly.org/v1alpha1","kind":"WildFlyServer","metadata":{"annotations":{},"name":"quickstart","namespace":"pkremens-namespa...
> API Version:  wildfly.org/v1alpha1
> Kind:         WildFlyServer
> Metadata:
>   Creation Timestamp:  2019-09-26T10:10:43Z
>   Generation:          1
>   Resource Version:    337905
>   Self Link:           /apis/wildfly.org/v1alpha1/namespaces/pkremens-namespace/wildflyservers/quickstart
>   UID:                 e63168ce-e045-11e9-9045-52fdfc072182
> Spec:
>   Application Image:  quay.io/wildfly-quickstarts/wildfly-operator-quickstart:17.0
>   Size:               2
>   Storage:
>     Volume Claim Template:
>       Spec:
>         Resources:
>           Requests:
>             Storage:  3Gi
> Events:  <none>
> {noformat}
> Status is missing.
> Operator log error:
> {noformat}
> {"level":"error","ts":1569492856.288481,"logger":"wildlfyserver_resources","msg":"Failed to update status of WildFlyServer","WildFlyServer.Namespace":"pkremens-namespace","WildFlyServer.Name":"quickstart","error":"the server could not find the requested resource (put wildflyservers.wildfly.org quickstart)","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-l..."}
> {"level":"error","ts":1569492856.288641,"logger":"wildflyserver_controller","msg":"Failed to update WildFlyServer status.","Request.Namespace":"pkremens-namespace","Request.Name":"quickstart","error":"the server could not find the requested resource (put wildflyservers.wildfly.org quickstart)","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-l..."}
> {"level":"error","ts":1569492856.288895,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"wildflyserver-controller","request":"pkremens-namespace/quickstart","error":"the server could not find the requested resource (put wildflyservers.wildfly.org quickstart)","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-l..."}
> {noformat}
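> The repeated "the server could not find the requested resource (put wildflyservers.wildfly.org quickstart)" points at the status update call; one plausible diagnostic (a sketch, assuming the CRD is meant to declare the /status subresource) is:
> {code:java}
> # empty output would mean the installed CRD has no status subresource defined
> oc get crd wildflyservers.wildfly.org -o jsonpath='{.spec.subresources}'
> {code}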