[JBoss JIRA] (JBDS-4151) Failed to start eclipse 10.2-0.20161101.1258 installed from RPM
by Nick Boldt (JIRA)
[ https://issues.jboss.org/browse/JBDS-4151?page=com.atlassian.jira.plugin.... ]
Nick Boldt commented on JBDS-4151:
----------------------------------
According to [~rgrunber], the -clean flag is probably needed to deal with leftover metadata in ~/.eclipse.
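For reference, a one-off launch with the flag should be enough to purge the stale metadata. A minimal sketch, assuming the usual software-collections layout for the rh-eclipse46 package (the exact launcher invocation may differ):
{code}
# run Eclipse once with -clean to discard cached OSGi/p2 metadata under ~/.eclipse
scl enable rh-eclipse46 "eclipse -clean"
{code}
Alternatively, -clean can be added on its own line above -vmargs in eclipse.ini to apply it on every start, at the cost of a slower startup.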
> Failed to start eclipse 10.2-0.20161101.1258 installed from RPM
> ---------------------------------------------------------------
>
> Key: JBDS-4151
> URL: https://issues.jboss.org/browse/JBDS-4151
> Project: Red Hat JBoss Developer Studio (devstudio)
> Issue Type: Bug
> Components: rpm
> Affects Versions: 10.2.0.AM3
> Environment: RHEL7 64bit
> Reporter: Lukáš Valach
> Assignee: Nick Boldt
> Priority: Blocker
> Fix For: 10.2.0.AM3
>
> Attachments: eclipse.log, failed_to_start_eclipse_10.2-0.20161101.1258.png, rh-eclipse46-devstudio-snapshots-10_2.repo, rh-eclipse46.repo
>
>
> I installed rh-eclipse46-devstudio-10.2-0.20161101.1258.el7.x86_64.rpm, then I started Eclipse and chose a workspace location (a new folder). Then I got this error window: !failed_to_start_eclipse_10.2-0.20161101.1258.png|thumbnail!
> See also the log [^eclipse.log].
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (JBIDE-23030) Scaling a service changes the wrong deployment
by Andre Dietisheim (JIRA)
[ https://issues.jboss.org/browse/JBIDE-23030?page=com.atlassian.jira.plugi... ]
Andre Dietisheim commented on JBIDE-23030:
------------------------------------------
The erroneous scaling info still comes up even with my patch in place. I'm trying to fix this and will update my PR.
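For reference, the fix essentially boils down to scaling the most recent replication controller instead of the first one found. A minimal sketch of that selection logic, assuming the usual <dc-name>-<version> naming convention for replication controllers generated by a deployment config (the interface and class here are illustrative stand-ins, not the actual patch):
{code}
import java.util.Comparator;
import java.util.List;

public class LatestRcPicker {

    /** Illustrative stand-in for an OpenShift replication controller resource. */
    interface IReplicationController {
        String getName(); // e.g. "nodejs-example-2"
    }

    /**
     * Picks the replication controller with the highest deployment version,
     * e.g. "nodejs-example-2" over "nodejs-example-1", so that "Scale To..."
     * reads and writes the replica count of the latest deployment.
     */
    static IReplicationController latest(List<IReplicationController> rcs) {
        return rcs.stream()
                .max(Comparator.comparingInt(LatestRcPicker::deploymentVersion))
                .orElseThrow(() -> new IllegalArgumentException("no replication controllers"));
    }

    /** Parses the trailing "-<version>" that deployment configs append to RC names. */
    static int deploymentVersion(IReplicationController rc) {
        String name = rc.getName();
        return Integer.parseInt(name.substring(name.lastIndexOf('-') + 1));
    }
}
{code}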
> Scaling a service changes the wrong deployment
> ----------------------------------------------
>
> Key: JBIDE-23030
> URL: https://issues.jboss.org/browse/JBIDE-23030
> Project: Tools (JBoss Tools)
> Issue Type: Bug
> Components: openshift
> Affects Versions: 4.4.1.AM3
> Reporter: Fred Bricon
> Assignee: Jeff Cantrill
> Priority: Critical
> Fix For: 4.4.2.AM3
>
> Attachments: rc-1-replicas-0.png, rc2-replicas-1.png, replicas-0.png, scale-to-shows-old-rc.ogv
>
>
> In the OpenShift Explorer, when scaling a service that has 2 deployments, pods are deployed from the oldest deployment instead of the latest. Eventually, these pods are killed by OpenShift.
> The workaround is to open the Properties view and issue a scale command on a deployment directly.
> However, because scaling is such a front-and-center feature of the explorer, I believe it's critical we fix it ASAP.
> (copied from JBIDE-23412)
> The steps below can be seen in the following screencast:
> [^scale-to-shows-old-rc.ogv]
> # ASSERT: have an app running (e.g. nodejs-example)
> # ASSERT: in OpenShift Explorer: make sure it has at least 1 pod: pick "Scale To" in the context menu of the service. The dialog shows that there's at least 1 pod currently
> # ASSERT: in OpenShift Explorer: there is 1 pod shown as a child of the service.
> # ASSERT: in Properties view: pick the "Deployments" tab and see that there's at least 1 deployment (aka replication controller)
> # EXEC: in OpenShift Explorer: select the service and pick "Deploy Latest"
> # ASSERT: in Properties view: "Deployments" now shows 2 deployments
> # ASSERT: in OpenShift Explorer: you now see 2 children/pods
> # EXEC: in OpenShift Explorer: pick "Scale To..." in the context menu of the service
> Result:
> The current number of pods is shown as 0.
> !replicas-0.png!
> But this is clearly not true: behind the scenes, a new replication controller was created, which deployed a new pod:
> !rc2-replicas-1.png!
> The old replication controller was scaled down to 0 pods. The "Scale To" dialog shows the number of replicas for the old replication controller.
> !rc-1-replicas-0.png!
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (JBTIS-975) SY - Avoid waiting for the job 'Synchronizing EMF resources'
by Andrej Podhradsky (JIRA)
[ https://issues.jboss.org/browse/JBTIS-975?page=com.atlassian.jira.plugin.... ]
Andrej Podhradsky updated JBTIS-975:
------------------------------------
Summary: SY - Avoid waiting for the job 'Synchronizing EMF resources' (was: SY - Avoid waiting for the 'Synchronizing EMF resources')
> SY - Avoid waiting for the job 'Synchronizing EMF resources'
> ------------------------------------------------------------
>
> Key: JBTIS-975
> URL: https://issues.jboss.org/browse/JBTIS-975
> Project: JBoss Tools Integration Stack
> Issue Type: Task
> Components: QE, switchyard
> Affects Versions: 4.4.0.Alpha1
> Reporter: Andrej Podhradsky
> Assignee: Andrej Podhradsky
> Fix For: 4.4.0.Final
>
>
> Sometimes it happens that some jobs take more than one minute, which is why my tests fail:
> {code}
> org.jboss.reddeer.common.exception.WaitTimeoutExpiredException: Timeout after: 60 s.: The following jobs are still running
> Refreshing SwitchYard project configuration.
> Synchronizing EMF resources
> Updating Maven Project
> {code}
> Usually the test 'SwitchYardIntegrationDroolsTest' fails due to the above exception.
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (JBTIS-975) SY - Avoid waiting for the 'Synchronizing EMF resources'
by Andrej Podhradsky (JIRA)
[ https://issues.jboss.org/browse/JBTIS-975?page=com.atlassian.jira.plugin.... ]
Andrej Podhradsky commented on JBTIS-975:
-----------------------------------------
If some jobs are still running and you try to promote a service, the 'Synchronizing EMF resources' job keeps running while the dialog is open. Note that you can press 'Cancel' or 'Finish' without any problem, and the job then stops.
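Since that job apparently only finishes once the dialog is closed, the test should probably not block on it at all. A minimal sketch of such a selective wait, using only the plain Eclipse Jobs API (the helper class, its method name, and the polling interval are illustrative assumptions, not existing RedDeer API):
{code}
import org.eclipse.core.runtime.jobs.Job;

public class JobWaits {

    /**
     * Polls (up to timeoutMs) until no job is running except the ones whose
     * names are deliberately ignored, e.g. "Synchronizing EMF resources",
     * which only terminates once the promote dialog is closed.
     */
    public static void waitForJobsExcept(long timeoutMs, String... ignoredNames)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            boolean busy = false;
            for (Job job : Job.getJobManager().find(null)) { // null family matches all jobs
                if (job.getState() == Job.RUNNING && !isIgnored(job.getName(), ignoredNames)) {
                    busy = true;
                    break;
                }
            }
            if (!busy) {
                return;
            }
            Thread.sleep(500); // poll every half second
        }
        throw new RuntimeException("Jobs still running after " + timeoutMs + " ms");
    }

    private static boolean isIgnored(String name, String[] ignoredNames) {
        for (String ignored : ignoredNames) {
            if (ignored.equals(name)) {
                return true;
            }
        }
        return false;
    }
}
{code}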
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (JBTIS-975) SY - Avoid waiting for the 'Synchronizing EMF resources'
by Andrej Podhradsky (JIRA)
Andrej Podhradsky created JBTIS-975:
---------------------------------------
Summary: SY - Avoid waiting for the 'Synchronizing EMF resources'
Key: JBTIS-975
URL: https://issues.jboss.org/browse/JBTIS-975
Project: JBoss Tools Integration Stack
Issue Type: Task
Components: QE, switchyard
Affects Versions: 4.4.0.Alpha1
Reporter: Andrej Podhradsky
Assignee: Andrej Podhradsky
Fix For: 4.4.0.Final
Sometimes it happens that some jobs take more than one minute, which is why my tests fail:
{code}
org.jboss.reddeer.common.exception.WaitTimeoutExpiredException: Timeout after: 60 s.: The following jobs are still running
Refreshing SwitchYard project configuration.
Synchronizing EMF resources
Updating Maven Project
{code}
Usually the test 'SwitchYardIntegrationDroolsTest' fails due to the above exception.
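If the synchronization jobs legitimately need more than a minute on loaded machines, another option is to give the wait a larger time budget rather than skipping jobs. A minimal sketch, assuming the RedDeer 1.x wait API that the exception above comes from (the wrapper class and the two-minute figure are illustrative assumptions):
{code}
import org.jboss.reddeer.common.wait.TimePeriod;
import org.jboss.reddeer.common.wait.WaitWhile;
import org.jboss.reddeer.core.condition.JobIsRunning;

public class LongerJobWait {

    /**
     * Blocks until no job is running, with a 120 s budget instead of the
     * default 60 s (TimePeriod.LONG) reported by WaitTimeoutExpiredException.
     */
    public static void waitForAllJobs() {
        new WaitWhile(new JobIsRunning(), TimePeriod.getCustom(120));
    }
}
{code}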
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (JBIDE-23030) Scaling a service changes the wrong deployment
by Andre Dietisheim (JIRA)
[ https://issues.jboss.org/browse/JBIDE-23030?page=com.atlassian.jira.plugi... ]
Andre Dietisheim updated JBIDE-23030:
-------------------------------------
Attachment: replicas-0.png
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (JBIDE-23030) Scaling a service changes the wrong deployment
by Andre Dietisheim (JIRA)
[ https://issues.jboss.org/browse/JBIDE-23030?page=com.atlassian.jira.plugi... ]
Andre Dietisheim updated JBIDE-23030:
-------------------------------------
Attachment: rc-1-replicas-0.png
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (JBIDE-23030) Scaling a service changes the wrong deployment
by Andre Dietisheim (JIRA)
[ https://issues.jboss.org/browse/JBIDE-23030?page=com.atlassian.jira.plugi... ]
Andre Dietisheim updated JBIDE-23030:
-------------------------------------
Attachment: rc2-replicas-1.png
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)