[JBoss JIRA] (JBIDE-25442) Add possibility to be notified when user logs in to OSIO
by Jeff MAURY (JIRA)
[ https://issues.jboss.org/browse/JBIDE-25442?page=com.atlassian.jira.plugi... ]
Jeff MAURY closed JBIDE-25442.
------------------------------
Resolution: Done
> Add possibility to be notified when user logs in to OSIO
> --------------------------------------------------------
>
> Key: JBIDE-25442
> URL: https://issues.jboss.org/browse/JBIDE-25442
> Project: Tools (JBoss Tools)
> Issue Type: Feature Request
> Components: openshift
> Affects Versions: 4.5.2.AM1
> Reporter: Lucia Jelinkova
> Assignee: Jeff MAURY
> Labels: OSIO
> Fix For: 4.5.2.Final
>
>
> I have the following scenario I'd like to implement with OSIO login. I am not sure whether it is currently possible or would need some additional implementation.
> The Fabric8 analytics plugin for Eclipse uses an LSP server that needs an OSIO token to work. The current implementation asks the user for the token the first time a pom.xml file is opened and then every time the LSP server needs to be started (meaning on every open of any pom.xml file). This approach has some disadvantages - e.g. the user might cancel the login window and then has to manually re-enable the server on the Fabric8 preference page. Not to mention that a login window appearing out of nowhere is quite disruptive.
> I would like to change it so that the user logs in to OSIO (via a tool button or the preference page) and the LSP server is then notified about the login and starts working. This would prevent the "login popup" problem and I think it would give a better user experience.
> So, is there currently a way to get notifications about OSIO login/logout? Would you agree to implement this feature in the OSIO Eclipse plugin?
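A listener/notifier pair would cover the scenario described above. The sketch below is purely illustrative - none of these type or method names exist in the OSIO Eclipse plugin today, it only shows the shape such an API could take:

```java
// Hypothetical sketch of an OSIO login notification API.
// None of these names are part of the actual OSIO plugin.
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

interface OSIOLoginListener {
    void loggedIn(String token);   // called with the freshly obtained OSIO token
    void loggedOut();
}

class OSIOLoginNotifier {
    private final List<OSIOLoginListener> listeners = new CopyOnWriteArrayList<>();

    void addListener(OSIOLoginListener l) { listeners.add(l); }
    void removeListener(OSIOLoginListener l) { listeners.remove(l); }

    // Invoked by the login tool button / preference page on successful login,
    // so e.g. the Fabric8 LSP server can start without its own token prompt.
    void fireLoggedIn(String token) {
        for (OSIOLoginListener l : listeners) l.loggedIn(token);
    }
}
```

The LSP server integration would register a listener at plugin startup instead of prompting on every pom.xml open.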
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (JBIDE-25611) Server adapter: timeouts when starting into "Debug" on OpenShift Online
by Andre Dietisheim (JIRA)
[ https://issues.jboss.org/browse/JBIDE-25611?page=com.atlassian.jira.plugi... ]
Andre Dietisheim edited comment on JBIDE-25611 at 1/25/18 7:04 PM:
-------------------------------------------------------------------
I'm even more confused than I was before. What failed all day long above now seems to magically work: I can deploy and debug an app that I created via a nodejs:builder image.
The case where I create an app via the template "nodejs-mongo-persistent" now fails with a different error:
!image-2018-01-26-00-49-39-691.png!
It now doesn't get the docker image labels that indicate where to deploy to within the pod. As far as I remember the deployment path is only available once the image stream is imported. Online seems very low on resources (far fewer than all other variants), so this could be a timing issue.
was (Author: adietish):
I'm even more confused than I was before. What failed all day long above now seems to magically work: I can deploy and debug an app that I created via a nodejs:builder image.
The case where I create an app via the template "nodejs-mongo-persistent" now fails with a different error:
!image-2018-01-26-00-49-39-691.png!
It now doesn't get the docker image labels that indicate where to deploy to within the pod.
> Server adapter: timeouts when starting into "Debug" on OpenShift Online
> -----------------------------------------------------------------------
>
> Key: JBIDE-25611
> URL: https://issues.jboss.org/browse/JBIDE-25611
> Project: Tools (JBoss Tools)
> Issue Type: Bug
> Components: openshift
> Affects Versions: 4.5.2.Final
> Reporter: Andre Dietisheim
> Assignee: Andre Dietisheim
> Fix For: 4.5.3.AM1
>
> Attachments: error-waiting-for-new-pod.png, image-2018-01-24-18-58-29-159.png, image-2018-01-25-10-49-09-369.png, image-2018-01-26-00-49-39-691.png, pod-ports-cdk.png, pod-ports-online.png, port-missing.png
>
>
> # ASSERT: have an account on OpenShift Online
> # EXEC: create a new application via template nodejs-mongo-persistent
> # EXEC: have the project imported into the workspace and the server adapter for it created
> # ASSERT: you have a new server adapter for your app in state [stopped]
> # EXEC: start the adapter into "Debug"
> Result:
> The adapter takes a long time to start but eventually fails. You get the following error dialog:
> !error-waiting-for-new-pod.png!
> In the Eclipse log you'll find the following:
> {code}
> org.eclipse.core.runtime.CoreException: Failed to detect new deployed Pod for nodejs-mongo-persistent
> at org.jboss.tools.openshift.internal.core.server.debug.OpenShiftDebugMode.waitFor(OpenShiftDebugMode.java:403)
> at org.jboss.tools.openshift.internal.core.server.debug.OpenShiftDebugMode.waitForNewPod(OpenShiftDebugMode.java:395)
> at org.jboss.tools.openshift.internal.core.server.debug.OpenShiftDebugMode.getPod(OpenShiftDebugMode.java:226)
> at org.jboss.tools.openshift.internal.core.server.debug.OpenShiftDebugMode.execute(OpenShiftDebugMode.java:170)
> at org.jboss.tools.openshift.core.server.behavior.OpenShiftLaunchController.launch(OpenShiftLaunchController.java:100)
> at org.jboss.ide.eclipse.as.wtp.core.server.launch.ControllableServerLaunchConfiguration.launch(ControllableServerLaunchConfiguration.java:52)
> at org.eclipse.debug.internal.core.LaunchConfiguration.launch(LaunchConfiguration.java:885)
> at org.eclipse.debug.internal.core.LaunchConfiguration.launch(LaunchConfiguration.java:739)
> at org.eclipse.debug.internal.core.LaunchConfiguration.launch(LaunchConfiguration.java:731)
> at org.eclipse.wst.server.core.internal.Server.startImpl2(Server.java:3566)
> at org.eclipse.wst.server.core.internal.Server.startImpl(Server.java:3502)
> at org.eclipse.wst.server.core.internal.Server$StartJob.run(Server.java:377)
> at org.eclipse.core.internal.jobs.Worker.run(Worker.java:56)
> {code}
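The trace above comes out of a bounded wait for the new pod (OpenShiftDebugMode.waitForNewPod). As a rough sketch of that kind of poll-with-deadline loop - names, signatures and timeouts here are illustrative, not the actual OpenShiftDebugMode implementation:

```java
// Illustrative sketch of a bounded poll for a condition such as
// "a new deployed pod exists"; not the actual tooling code.
import java.util.function.Supplier;

class PodWait {
    /** Polls until condition holds or timeoutMillis elapses; returns whether it held. */
    static boolean waitFor(Supplier<Boolean> condition, long timeoutMillis, long pollMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (condition.get()) {
                return true;
            }
            Thread.sleep(pollMillis);
        }
        // On timeout the caller would raise the CoreException seen in the log:
        // "Failed to detect new deployed Pod for nodejs-mongo-persistent"
        return false;
    }
}
```

If the cluster is slow (as suspected for Online), the condition never becomes true within the deadline and the launch fails with the dialog shown above.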
[JBoss JIRA] (JBIDE-25611) Server adapter: timeouts when starting into "Debug" on OpenShift Online
by Andre Dietisheim (JIRA)
[ https://issues.jboss.org/browse/JBIDE-25611?page=com.atlassian.jira.plugi... ]
Andre Dietisheim edited comment on JBIDE-25611 at 1/25/18 6:50 PM:
-------------------------------------------------------------------
I'm even more confused than I was before. What failed all day long above now seems to magically work: I can deploy and debug an app that I created via a nodejs:builder image.
The case where I create an app via the template "nodejs-mongo-persistent" now fails with a different error:
!image-2018-01-26-00-49-39-691.png!
It now doesn't get the docker image labels that indicate where to deploy to within the pod.
was (Author: adietish):
I'm even more confused than I was before. What failed all day long above now seems to magically work: I can deploy and debug an app that I created via a nodejs:builder image.
[JBoss JIRA] (JBIDE-25611) Server adapter: timeouts when starting into "Debug" on OpenShift Online
by Andre Dietisheim (JIRA)
[ https://issues.jboss.org/browse/JBIDE-25611?page=com.atlassian.jira.plugi... ]
Andre Dietisheim commented on JBIDE-25611:
------------------------------------------
I'm even more confused than I was before. What failed all day long above now seems to magically work: I can deploy and debug an app that I created via a nodejs:builder image.
[JBoss JIRA] (JBIDE-25611) Server adapter: timeouts when starting into "Debug" on OpenShift Online
by Andre Dietisheim (JIRA)
[ https://issues.jboss.org/browse/JBIDE-25611?page=com.atlassian.jira.plugi... ]
Andre Dietisheim edited comment on JBIDE-25611 at 1/25/18 6:31 PM:
-------------------------------------------------------------------
When deploying a nodejs:latest builder image to OpenShift Online, the pod is not found because pods are created with an erroneous *"openshift.io/deployment-config.name"* annotation. On all other OpenShift variants this won't happen.
In Online I see pods being created where the *"openshift.io/deployment-config.name"* annotation carries "nodejs-mongo-persistent" whereas the deployment config is named "nodejs". Our tooling thus doesn't find the pod that it should operate on (deploy, port-forward).
Deployment Config that the tooling creates:
{code}
{
"kind": "DeploymentConfig",
"apiVersion": "v1",
"metadata": {
"name": "nodejs",
"namespace": "adietish-stage",
"selfLink": "/oapi/v1/namespaces/adietish-stage/deploymentconfigs/nodejs",
"uid": "0a16727d-0211-11e8-aa99-0233cba325d9",
"resourceVersion": "838086675",
"generation": 3,
"creationTimestamp": "2018-01-25T20:48:04Z",
"labels": {
"deploymentconfig": "nodejs"
},
"annotations": {
"openshift.io/generated-by": "jbosstools-openshift"
}
},
"spec": {
"strategy": {
"type": "Rolling",
"rollingParams": {
"updatePeriodSeconds": 1,
"intervalSeconds": 1,
"timeoutSeconds": 600,
"maxUnavailable": "25%",
"maxSurge": "25%"
},
"resources": {},
"activeDeadlineSeconds": 21600
},
"triggers": [
{
"type": "ConfigChange"
},
{
"type": "ImageChange",
"imageChangeParams": {
"automatic": true,
"containerNames": [
"nodejs"
],
"from": {
"kind": "ImageStreamTag",
"namespace": "adietish-stage",
"name": "nodejs:latest"
},
"lastTriggeredImage": "172.30.98.11:5000/adietish-stage/nodejs@sha256:3d45703157c352fb585a708efda1bfaa38c533ea5371b16cf82ab8a09f22ebcc"
}
}
],
"replicas": 1,
"test": false,
"selector": {
"deploymentconfig": "nodejs"
},
"template": {
"metadata": {
"labels": {
"deploymentconfig": "nodejs"
}
},
"spec": {
"containers": [
{
"name": "nodejs",
"image": "172.30.98.11:5000/adietish-stage/nodejs@sha256:3d45703157c352fb585a708efda1bfaa38c533ea5371b16cf82ab8a09f22ebcc",
"ports": [
{
"name": "8080-tcp",
"containerPort": 8080,
"protocol": "TCP"
},
{
"name": "debug",
"containerPort": 5858,
"protocol": "TCP"
}
],
"env": [
{
"name": "DEBUG_PORT",
"value": "5858"
},
{
"name": "DEV_MODE",
"value": "true"
}
],
"resources": {},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "Always"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"securityContext": {},
"schedulerName": "default-scheduler"
}
}
},
"status": {
"latestVersion": 2,
"observedGeneration": 3,
"replicas": 0,
"updatedReplicas": 0,
"availableReplicas": 0,
"unavailableReplicas": 0,
"details": {
"message": "config change",
"causes": [
{
"type": "ConfigChange"
}
]
},
"conditions": [
{
"type": "Available",
"status": "False",
"lastUpdateTime": "2018-01-25T20:48:04Z",
"lastTransitionTime": "2018-01-25T20:48:04Z",
"message": "Deployment config does not have minimum availability."
},
{
"type": "Progressing",
"status": "False",
"lastUpdateTime": "2018-01-25T21:03:51Z",
"lastTransitionTime": "2018-01-25T21:03:51Z",
"reason": "ProgressDeadlineExceeded",
"message": "replication controller \"nodejs-2\" has failed progressing"
}
]
}
}
{code}
Replication Controller:
{code}
{
"kind" : "ReplicationController",
"apiVersion" : "v1",
"metadata" : {
"name" : "nodejs-2",
"namespace" : "adietish-stage",
"selfLink" : "/api/v1/namespaces/adietish-stage/replicationcontrollers/nodejs-2",
"uid" : "95b3d727-0225-11e8-b468-02d7377a4b17",
"resourceVersion" : "838449729",
"generation" : 2,
"creationTimestamp" : "2018-01-25T23:15:08Z",
"labels" : {
"deploymentconfig" : "nodejs",
"openshift.io/deployment-config.name" : "nodejs"
},
"annotations" : {
"openshift.io/deployer-pod.completed-at" : "2018-01-25 23:15:12 +0000 UTC",
"openshift.io/deployer-pod.created-at" : "2018-01-25 23:15:08 +0000 UTC",
"openshift.io/deployer-pod.name" : "nodejs-2-deploy",
"openshift.io/deployment-config.latest-version" : "2",
"openshift.io/deployment-config.name" : "nodejs",
"openshift.io/deployment.phase" : "Complete",
"openshift.io/deployment.replicas" : "1",
"openshift.io/deployment.status-reason" : "image change",
"openshift.io/encoded-deployment-config" : "{\"kind\":\"DeploymentConfig\",\"apiVersion\":\"v1\",\"metadata\":{\"name\":\"nodejs\",\"namespace\":\"adietish-stage\",\"selfLink\":\"/apis/apps.openshift.io/v1/namespaces/adietish-stage/deploymentconfigs/nodejs\",\"uid\":\"89254d7d-0225-11e8-af4a-02e52a0be43d\",\"resourceVersion\":\"838449428\",\"generation\":2,\"creationTimestamp\":\"2018-01-25T23:14:47Z\",\"labels\":{\"deploymentconfig\":\"nodejs\"},\"annotations\":{\"openshift.io/generated-by\":\"jbosstools-openshift\"}},\"spec\":{\"strategy\":{\"type\":\"Rolling\",\"rollingParams\":{\"updatePeriodSeconds\":1,\"intervalSeconds\":1,\"timeoutSeconds\":600,\"maxUnavailable\":\"25%\",\"maxSurge\":\"25%\"},\"resources\":{},\"activeDeadlineSeconds\":21600},\"triggers\":[{\"type\":\"ConfigChange\"},{\"type\":\"ImageChange\",\"imageChangeParams\":{\"automatic\":true,\"containerNames\":[\"nodejs\"],\"from\":{\"kind\":\"ImageStreamTag\",\"namespace\":\"adietish-stage\",\"name\":\"nodejs:latest\"},\"lastTriggeredImage\":\"172.30.98.11:5000/adietish-stage/nodejs@sha256:210293c339eda9c97624440d8e3382b62eca941d5f703555146cea096a10aacd\"}}],\"replicas\":1,\"test\":false,\"selector\":{\"deploymentconfig\":\"nodejs\"},\"template\":{\"metadata\":{\"creationTimestamp\":null,\"labels\":{\"deploymentconfig\":\"nodejs\"}},\"spec\":{\"containers\":[{\"name\":\"nodejs\",\"image\":\"172.30.98.11:5000/adietish-stage/nodejs@sha256:210293c339eda9c97624440d8e3382b62eca941d5f703555146cea096a10aacd\",\"ports\":[{\"name\":\"8080-tcp\",\"containerPort\":8080,\"protocol\":\"TCP\"}],\"resources\":{},\"terminationMessagePath\":\"/dev/termination-log\",\"terminationMessagePolicy\":\"File\",\"imagePullPolicy\":\"Always\"}],\"restartPolicy\":\"Always\",\"terminationGracePeriodSeconds\":30,\"dnsPolicy\":\"ClusterFirst\",\"securityContext\":{},\"schedulerName\":\"default-scheduler\"}}},\"status\":{\"latestVersion\":2,\"observedGeneration\":2,\"replicas\":1,\"updatedReplicas\":0,\"availableReplicas\":0,\"unav
ailableReplicas\":1,\"details\":{\"message\":\"image change\",\"causes\":[{\"type\":\"ImageChange\",\"imageTrigger\":{\"from\":{\"kind\":\"DockerImage\",\"name\":\"172.30.98.11:5000/adietish-stage/nodejs@sha256:210293c339eda9c97624440d8e3382b62eca941d5f703555146cea096a10aacd\"}}}]},\"conditions\":[{\"type\":\"Available\",\"status\":\"False\",\"lastUpdateTime\":\"2018-01-25T23:14:47Z\",\"lastTransitionTime\":\"2018-01-25T23:14:47Z\",\"message\":\"Deployment config does not have minimum availability.\"},{\"type\":\"Progressing\",\"status\":\"False\",\"lastUpdateTime\":\"2018-01-25T23:15:08Z\",\"lastTransitionTime\":\"2018-01-25T23:15:08Z\",\"reason\":\"ProgressDeadlineExceeded\",\"message\":\"replication controller \\\"nodejs-1\\\" has failed progressing\"}]}}\n"
},
"ownerReferences" : [{
"apiVersion" : "apps.openshift.io/v1",
"kind" : "DeploymentConfig",
"name" : "nodejs",
"uid" : "89254d7d-0225-11e8-af4a-02e52a0be43d",
"controller" : true,
"blockOwnerDeletion" : true
}]
},
"spec" : {
"replicas" : 1,
"selector" : {
"deployment" : "nodejs-2",
"deploymentconfig" : "nodejs"
},
"template" : {
"metadata" : {
"labels" : {
"deployment" : "nodejs-2",
"deploymentconfig" : "nodejs"
},
"annotations" : {
"openshift.io/deployment-config.latest-version" : "2",
"openshift.io/deployment-config.name" : "nodejs",
"openshift.io/deployment.name" : "nodejs-2"
}
},
"spec" : {
"containers" : [{
"name" : "nodejs",
"image" : "172.30.98.11:5000/adietish-stage/nodejs@sha256:210293c339eda9c97624440d8e3382b62eca941d5f703555146cea096a10aacd",
"ports" : [{
"name" : "8080-tcp",
"containerPort" : 8080,
"protocol" : "TCP"
}],
"resources" : {},
"terminationMessagePath" : "/dev/termination-log",
"terminationMessagePolicy" : "File",
"imagePullPolicy" : "Always"
}],
"restartPolicy" : "Always",
"terminationGracePeriodSeconds" : 30,
"dnsPolicy" : "ClusterFirst",
"securityContext" : {},
"schedulerName" : "default-scheduler"
}
}
},
"status" : {
"replicas" : 1,
"fullyLabeledReplicas" : 1,
"readyReplicas" : 1,
"availableReplicas" : 1,
"observedGeneration" : 2
}
}
{code}
annotations on the nodejs Pod:
{code}
{
"openshift.io/deployment-config.latest-version"="3",
"kubernetes.io/limit-ranger"="LimitRanger plugin set: cpu request for container nodejs-mongo-persistent; cpu limit for container nodejs-mongo-persistent",
"openshift.io/deployment-config.name"="nodejs-mongo-persistent",
"openshift.io/deployment.name"="nodejs-mongo-persistent-3",
"kubernetes.io/created-by"="{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"adietish-stage","name":"nodejs-mongo-persistent-3","uid":"64acf60d-003c-11e8-bed0-0233cba325d9","apiVersion":"v1","resourceVersion":"830225895"}}
",
"openshift.io/scc"="restricted",
}
{code}
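To illustrate the mismatch: a lookup keyed on the *"openshift.io/deployment-config.name"* annotation fails against the pod annotations dumped above, while matching on the *"deploymentconfig"* label (which is correct in the dumps) would still find the pod. A minimal sketch, with plain maps standing in for the real resource objects - this is not the actual tooling code:

```java
// Sketch of the two possible pod-matching strategies; the maps stand in
// for pod metadata, not for the real OpenShift client resource types.
import java.util.Map;

class PodMatch {
    // Fails on Online, where the annotation carries the template name
    // ("nodejs-mongo-persistent") instead of the DC name ("nodejs").
    static boolean byAnnotation(Map<String, String> podAnnotations, String dcName) {
        return dcName.equals(podAnnotations.get("openshift.io/deployment-config.name"));
    }

    // Matches the "deploymentconfig" label, which is consistent in the dumps above.
    static boolean byLabel(Map<String, String> podLabels, String dcName) {
        return dcName.equals(podLabels.get("deploymentconfig"));
    }
}
```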
was (Author: adietish):
It very much looks to me as if the templates differ:
On OpenShift Online, when restarting the adapter to debug into a finished build, the pod annotations don't carry the correct name of the deployment config. I see pods being created where the *"openshift.io/deployment-config.name"* annotation carries "nodejs-mongo-persistent" whereas the deployment config is named "nodejs". Our tooling thus doesn't find the pod that it should operate on (deploy, port-forward).
[JBoss JIRA] (JBIDE-25611) Server adapter: timeouts when starting into "Debug" on OpenShift Online
by Andre Dietisheim (JIRA)
[ https://issues.jboss.org/browse/JBIDE-25611?page=com.atlassian.jira.plugi... ]
Andre Dietisheim edited comment on JBIDE-25611 at 1/25/18 5:03 PM:
-------------------------------------------------------------------
It very much looks to me as if the templates differ:
On OpenShift Online, when restarting the adapter to debug into a finished build, the pod annotations don't carry the correct name of the deployment config. I see pods being created where the *"openshift.io/deployment-config.name"* annotation carries "nodejs-mongo-persistent" whereas the deployment config is named "nodejs". Our tooling thus doesn't find the pod that it should operate on (deploy, port-forward).
was (Author: adietish):
It very much looks to me as if the templates differ:
(In Online only) I see pods being created where the annotation *"openshift.io/deployment-config.name"* carries "nodejs-mongo-persistent", whereas the deployment config is named "nodejs". Our tooling thus doesn't find the pod that it should operate on (deploy, port-forward).
{code}
{
"kind": "DeploymentConfig",
"apiVersion": "v1",
"metadata": {
"name": "nodejs",
"namespace": "adietish-stage",
"selfLink": "/oapi/v1/namespaces/adietish-stage/deploymentconfigs/nodejs",
"uid": "0a16727d-0211-11e8-aa99-0233cba325d9",
"resourceVersion": "838086675",
"generation": 3,
"creationTimestamp": "2018-01-25T20:48:04Z",
"labels": {
"deploymentconfig": "nodejs"
},
"annotations": {
"openshift.io/generated-by": "jbosstools-openshift"
}
},
"spec": {
"strategy": {
"type": "Rolling",
"rollingParams": {
"updatePeriodSeconds": 1,
"intervalSeconds": 1,
"timeoutSeconds": 600,
"maxUnavailable": "25%",
"maxSurge": "25%"
},
"resources": {},
"activeDeadlineSeconds": 21600
},
"triggers": [
{
"type": "ConfigChange"
},
{
"type": "ImageChange",
"imageChangeParams": {
"automatic": true,
"containerNames": [
"nodejs"
],
"from": {
"kind": "ImageStreamTag",
"namespace": "adietish-stage",
"name": "nodejs:latest"
},
"lastTriggeredImage": "172.30.98.11:5000/adietish-stage/nodejs@sha256:3d45703157c352fb585a708efda1bfaa38c533ea5371b16cf82ab8a09f22ebcc"
}
}
],
"replicas": 1,
"test": false,
"selector": {
"deploymentconfig": "nodejs"
},
"template": {
"metadata": {
"labels": {
"deploymentconfig": "nodejs"
}
},
"spec": {
"containers": [
{
"name": "nodejs",
"image": "172.30.98.11:5000/adietish-stage/nodejs@sha256:3d45703157c352fb585a708efda1bfaa38c533ea5371b16cf82ab8a09f22ebcc",
"ports": [
{
"name": "8080-tcp",
"containerPort": 8080,
"protocol": "TCP"
},
{
"name": "debug",
"containerPort": 5858,
"protocol": "TCP"
}
],
"env": [
{
"name": "DEBUG_PORT",
"value": "5858"
},
{
"name": "DEV_MODE",
"value": "true"
}
],
"resources": {},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "Always"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"securityContext": {},
"schedulerName": "default-scheduler"
}
}
},
"status": {
"latestVersion": 2,
"observedGeneration": 3,
"replicas": 0,
"updatedReplicas": 0,
"availableReplicas": 0,
"unavailableReplicas": 0,
"details": {
"message": "config change",
"causes": [
{
"type": "ConfigChange"
}
]
},
"conditions": [
{
"type": "Available",
"status": "False",
"lastUpdateTime": "2018-01-25T20:48:04Z",
"lastTransitionTime": "2018-01-25T20:48:04Z",
"message": "Deployment config does not have minimum availability."
},
{
"type": "Progressing",
"status": "False",
"lastUpdateTime": "2018-01-25T21:03:51Z",
"lastTransitionTime": "2018-01-25T21:03:51Z",
"reason": "ProgressDeadlineExceeded",
"message": "replication controller \"nodejs-2\" has failed progressing"
}
]
}
}
{code}
annotations on the nodejs pod:
{code}
{
"openshift.io/deployment-config.latest-version"="3",
"kubernetes.io/limit-ranger"="LimitRanger plugin set: cpu request for container nodejs-mongo-persistent; cpu limit for container nodejs-mongo-persistent",
"openshift.io/deployment-config.name"="nodejs-mongo-persistent",
"openshift.io/deployment.name"="nodejs-mongo-persistent-3",
"kubernetes.io/created-by"="{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"adietish-stage","name":"nodejs-mongo-persistent-3","uid":"64acf60d-003c-11e8-bed0-0233cba325d9","apiVersion":"v1","resourceVersion":"830225895"}}
",
"openshift.io/scc"="restricted",
}
{code}
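The mismatch described above can be sketched as follows. This is a hypothetical helper, not the actual JBoss Tools code: it assumes the tooling looks up the pod whose *"openshift.io/deployment-config.name"* annotation equals the deployment config's name, which fails for the values pasted above.

```java
import java.util.Map;

// Hypothetical sketch of the pod lookup (not the actual JBoss Tools code):
// a pod is considered to belong to a deployment config when its
// "openshift.io/deployment-config.name" annotation equals the dc's name.
public class PodMatcher {

    static final String DC_NAME_ANNOTATION = "openshift.io/deployment-config.name";

    // Returns true if the pod's annotation points back at the given dc name.
    static boolean belongsTo(Map<String, String> podAnnotations, String dcName) {
        return dcName.equals(podAnnotations.get(DC_NAME_ANNOTATION));
    }

    public static void main(String[] args) {
        // Values taken from the JSON above: the dc is named "nodejs", but the
        // pod annotation carries "nodejs-mongo-persistent".
        Map<String, String> podAnnotations = Map.of(
                DC_NAME_ANNOTATION, "nodejs-mongo-persistent",
                "openshift.io/scc", "restricted");

        System.out.println(belongsTo(podAnnotations, "nodejs"));                  // false -> pod not found
        System.out.println(belongsTo(podAnnotations, "nodejs-mongo-persistent")); // true
    }
}
```

With the annotation carrying the template's name instead of the dc's name, the lookup returns false for every pod, so the tooling never finds a pod to deploy to or port-forward.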
> Server adapter: timeouts when starting into "Debug" on OpenShift Online
> -----------------------------------------------------------------------
>
> Key: JBIDE-25611
> URL: https://issues.jboss.org/browse/JBIDE-25611
> Project: Tools (JBoss Tools)
> Issue Type: Bug
> Components: openshift
> Affects Versions: 4.5.2.Final
> Reporter: Andre Dietisheim
> Assignee: Andre Dietisheim
> Fix For: 4.5.3.AM1
>
> Attachments: error-waiting-for-new-pod.png, image-2018-01-24-18-58-29-159.png, image-2018-01-25-10-49-09-369.png, pod-ports-cdk.png, pod-ports-online.png, port-missing.png
>
>
> # ASSERT: have an account on OpenShift Online
> # EXEC: create a new application via template nodejs-mongo-persistent
> # EXEC: have the project imported into the workspace and the server adapter for it created
> # ASSERT: you have a new server adapter for your app in state [stopped]
> # EXEC: start the adapter into "Debug"
> Result:
> The adapter takes a long time to start but eventually fails. You get the following error dialog:
> !error-waiting-for-new-pod.png!
> In the Eclipse log you'll find the following:
> {code}
> org.eclipse.core.runtime.CoreException: Failed to detect new deployed Pod for nodejs-mongo-persistent
> at org.jboss.tools.openshift.internal.core.server.debug.OpenShiftDebugMode.waitFor(OpenShiftDebugMode.java:403)
> at org.jboss.tools.openshift.internal.core.server.debug.OpenShiftDebugMode.waitForNewPod(OpenShiftDebugMode.java:395)
> at org.jboss.tools.openshift.internal.core.server.debug.OpenShiftDebugMode.getPod(OpenShiftDebugMode.java:226)
> at org.jboss.tools.openshift.internal.core.server.debug.OpenShiftDebugMode.execute(OpenShiftDebugMode.java:170)
> at org.jboss.tools.openshift.core.server.behavior.OpenShiftLaunchController.launch(OpenShiftLaunchController.java:100)
> at org.jboss.ide.eclipse.as.wtp.core.server.launch.ControllableServerLaunchConfiguration.launch(ControllableServerLaunchConfiguration.java:52)
> at org.eclipse.debug.internal.core.LaunchConfiguration.launch(LaunchConfiguration.java:885)
> at org.eclipse.debug.internal.core.LaunchConfiguration.launch(LaunchConfiguration.java:739)
> at org.eclipse.debug.internal.core.LaunchConfiguration.launch(LaunchConfiguration.java:731)
> at org.eclipse.wst.server.core.internal.Server.startImpl2(Server.java:3566)
> at org.eclipse.wst.server.core.internal.Server.startImpl(Server.java:3502)
> at org.eclipse.wst.server.core.internal.Server$StartJob.run(Server.java:377)
> at org.eclipse.core.internal.jobs.Worker.run(Worker.java:56)
> {code}
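The `OpenShiftDebugMode.waitFor` frame in the stack trace above suggests a poll-with-timeout loop that gives up with the "Failed to detect new deployed Pod" error. A hypothetical sketch of such a loop (names and signature are assumptions, not the actual JBoss Tools code):

```java
import java.util.function.Supplier;

// Hypothetical sketch of a poll-with-timeout wait, as suggested by the
// OpenShiftDebugMode.waitFor frame in the stack trace; not the actual code.
public class WaitFor {

    // Polls `condition` until it returns non-null or `timeoutMillis` elapses;
    // returns null on timeout.
    static <T> T waitFor(Supplier<T> condition, long timeoutMillis, long pollMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            T result = condition.get();
            if (result != null) {
                return result;
            }
            Thread.sleep(pollMillis);
        }
        // In the tooling, a null result here surfaces as the CoreException
        // "Failed to detect new deployed Pod for ..."
        return null;
    }

    public static void main(String[] args) throws InterruptedException {
        // With the annotation mismatch, the condition never matches a pod,
        // so such a loop always runs into its deadline.
        String pod = waitFor(() -> "nodejs-3-abcde", 1000, 10);
        System.out.println(pod); // prints "nodejs-3-abcde"
    }
}
```

This would explain the symptom: the wait is not failing because the pod is missing, but because the lookup condition never matches the pod that was actually created.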