[JBoss JIRA] (JBIDE-26697) Possible memory leak during Loading projects job when using CRC
by André Dietisheim (Jira)
[ https://issues.jboss.org/browse/JBIDE-26697?page=com.atlassian.jira.plugi... ]
André Dietisheim edited comment on JBIDE-26697 at 10/21/19 10:49 AM:
---------------------------------------------------------------------
I see 3 possible approaches to fix this situation:
* Watch events only: possible, but there are 2 problems:
** We need the resource instance that's affected. An event only carries a reference to the affected object, which provides name, namespace, timestamp etc. but lacks the resource instance itself. We would thus have to introduce an additional call to the backend to retrieve this resource.
** The change type that we provide when informing of changes: events always arrive as "ADD". We'd thus have to guess the "DELETE" and "MODIFY" change types from the event message. Usually pods that are killed are notified with a "Killing" message, new pods with "Created". Every other event would then be a "MODIFIED". This solution seems possible but unreliable.
The code for this could look like the following:
{code}
IEvent event = (IEvent) resource;
IObjectReference reference = event.getInvolvedObject();
String name = reference.getName();
String namespace = reference.getNamespace();
String kind = reference.getKind();
// additional backend call to retrieve the resource that the event refers to
IResource res = conn.getResource(kind, namespace, name);
// events are always reported as "ADD"; guess the real change type from the message
String message = event.getMessage();
ChangeType guessedChange = null;
if (message.contains("Killing")) {
    guessedChange = ChangeType.DELETED;
} else if (message.contains("Created")) {
    guessedChange = ChangeType.ADDED;
} else {
    guessedChange = ChangeType.MODIFIED;
}
{code}
* Hack the okHttp Dispatcher so that we reuse the same thread for watcher requests: Dispatcher#executed(Call) is invoked for every call and the dispatcher creates a thread for it. Call#request() returns the request, whose Request#url would let us verify whether we're dispatching for a watcher (see the sketches after this list). The con is that we'd be contradicting okHttp's design principles.
* Use a 2nd client for the watcher part: use AsyncHttpClient, which does not force a 1:1 request-to-thread model: https://github.com/AsyncHttpClient/async-http-client/
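For illustration, a rough sketch of what the dispatcher approach could look like. The class and helper names are made up, and the single-threaded executor throttles *all* async calls, not only the watch requests, which is exactly why this works against okHttp's design:
{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import okhttp3.Call;
import okhttp3.Dispatcher;
import okhttp3.OkHttpClient;

// Hypothetical sketch, not the actual restclient wiring.
public class WatchAwareClientFactory {

    public static OkHttpClient create() {
        // one shared thread for all dispatched calls instead of one thread per call;
        // caveat: this throttles *all* async calls, not only the watch requests
        ExecutorService singleThread = Executors.newSingleThreadExecutor();
        return new OkHttpClient.Builder()
                .dispatcher(new Dispatcher(singleThread))
                .build();
    }

    // tell watch calls apart from regular requests by inspecting Request#url
    public static boolean isWatchRequest(Call call) {
        return "true".equals(call.request().url().queryParameter("watch"));
    }
}
{code}
And a minimal sketch of the AsyncHttpClient alternative, assuming AsyncHttpClient 2.x; the server URL, namespace and listener body are illustrative only:
{code:java}
import org.asynchttpclient.AsyncHttpClient;
import org.asynchttpclient.Dsl;
import org.asynchttpclient.ws.WebSocket;
import org.asynchttpclient.ws.WebSocketListener;
import org.asynchttpclient.ws.WebSocketUpgradeHandler;

// Hypothetical sketch: a 2nd, NIO-based client used only for the watch connections.
public class WatchWithAsyncHttpClient {

    public static void main(String[] args) throws Exception {
        try (AsyncHttpClient client = Dsl.asyncHttpClient()) {
            client.prepareGet("wss://api.crc.testing:8443/api/v1/namespaces/myproject/pods?watch=true")
                    .execute(new WebSocketUpgradeHandler.Builder()
                            .addWebSocketListener(new WebSocketListener() {
                                @Override
                                public void onOpen(WebSocket websocket) {}

                                @Override
                                public void onClose(WebSocket websocket, int code, String reason) {}

                                @Override
                                public void onError(Throwable t) {}

                                @Override
                                public void onTextFrame(String payload, boolean finalFragment, int rsv) {
                                    // each text frame is one JSON-encoded watch event
                                }
                            })
                            .build())
                    .get();
        }
    }
}
{code}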
was (Author: adietish):
I see 3 possible approaches to fix this situation:
* Watch events only: Possible but there are 2 problems:
** We need the resource instance that's affected. In an event we only have the affected object which provides name, namespace, timestamp etc. and lacks the resource instance. We thus have to introduce an additional call to the backend to retrieve this resource.
** the change type that we provide when informing of changes: events are always "ADD". We'd thus have to guess "DELETE" and "MODIFY" change types via the event message. Usually pods that are killed are notified with a "Killing" message, new pods via "Created". Every other event then would be a "MODIFIED". This solution seems possible but unreliable.
The code for this could look like the following:
{code}
IEvent event = (IEvent) resource;
IObjectReference reference = event.getInvolvedObject();
String name = reference.getName();
String namespace = reference.getNamespace();
String kind = reference.getKind();
IResource res = conn.getResource(kind, namespace, name);
String message = event.getMessage();
ChangeType guessedChange = null;
if (message.contains("Killing")) {
guessedChange = ChangeType.DELETED;
} else if (message.contains("Created")) {
guessedChange = ChangeType.ADDED;
} else {
guessedChange = ChangeType.MODIFIED;
}
{code}
* hack the okHttp Dispatcher so that we're reusing the same thread for watcher requests: Dispatcher#executed(Call) is invoked for every call and the dispatcher creates a thread for it. Call#request() returns the request, whose Request#url would let us verify whether we're dispatching for a watcher. The con for this is that we're actually contradicting the okHttp design principles
* use a 2nd client for the watcher part: use AsyncHttpClient which does not force a 1:1 request-to-thread model: https://github.com/AsyncHttpClient/async-http-client/
> Possible memory leak during Loading projects job when using CRC
> ---------------------------------------------------------------
>
> Key: JBIDE-26697
> URL: https://issues.jboss.org/browse/JBIDE-26697
> Project: Tools (JBoss Tools)
> Issue Type: Bug
> Components: openshift
> Affects Versions: 4.12.0.Final
> Environment: Mac 10.14.5 (18F203), Fedora 29, crstudio GA-v20190626-0604-B4591
> Reporter: Josef Kopriva
> Assignee: André Dietisheim
> Priority: Major
> Fix For: 4.13.0.Final
>
>
> Exception in error log:
> {code:java}
> eclipse.buildId=12.12.0.GA-v20190626-0604-B4591
> java.version=11.0.2
> java.vendor=Oracle Corporation
> BootLoader constants: OS=macosx, ARCH=x86_64, WS=cocoa, NL=en_GB
> Framework arguments: -product com.jboss.devstudio.core.product -product com.jboss.devstudio.core.product -keyring /Users/jkopriva/.eclipse_keyring
> Command-line arguments: -os macosx -ws cocoa -arch x86_64 -product com.jboss.devstudio.core.product -data file:/Users/jkopriva/workspace_B4591_3/ -product com.jboss.devstudio.core.product -keyring /Users/jkopriva/.eclipse_keyring
> org.eclipse.core.jobs
> Error
> Wed Jun 26 14:36:11 CEST 2019
> An internal error occurred during: "Loading projects...".
> java.lang.OutOfMemoryError: Java heap space
> at org.jboss.dmr.JSONParser.yyParse(JSONParser.java:877)
> at org.jboss.dmr.ModelNode.fromJSONString(ModelNode.java:1472)
> at com.openshift.internal.restclient.ResourceFactory.create(ResourceFactory.java:100)
> at com.openshift.internal.restclient.ResourceFactory.createInstanceFrom(ResourceFactory.java:149)
> at com.openshift.internal.restclient.DefaultClient.execute(DefaultClient.java:304)
> at com.openshift.internal.restclient.DefaultClient.execute(DefaultClient.java:275)
> at com.openshift.internal.restclient.DefaultClient.execute(DefaultClient.java:264)
> at com.openshift.internal.restclient.DefaultClient.list(DefaultClient.java:171)
> at com.openshift.internal.restclient.DefaultClient.list(DefaultClient.java:160)
> at com.openshift.internal.restclient.DefaultClient.list(DefaultClient.java:151)
> at com.openshift.internal.restclient.capability.resources.ProjectTemplateListCapability.getCommonTemplates(ProjectTemplateListCapability.java:53)
> at org.jboss.tools.openshift.internal.ui.wizard.newapp.ApplicationSourceTreeItems$1.visit(ApplicationSourceTreeItems.java:121)
> at org.jboss.tools.openshift.internal.ui.wizard.newapp.ApplicationSourceTreeItems$1.visit(ApplicationSourceTreeItems.java:1)
> at com.openshift.internal.restclient.model.KubernetesResource.accept(KubernetesResource.java:94)
> at org.jboss.tools.openshift.internal.ui.wizard.newapp.ApplicationSourceTreeItems.loadTemplates(ApplicationSourceTreeItems.java:113)
> at org.jboss.tools.openshift.internal.ui.wizard.newapp.ApplicationSourceTreeItems.createChildren(ApplicationSourceTreeItems.java:61)
> at org.jboss.tools.openshift.internal.ui.treeitem.ObservableTreeItem.loadChildren(ObservableTreeItem.java:69)
> at org.jboss.tools.openshift.internal.ui.treeitem.ObservableTreeItem.load(ObservableTreeItem.java:56)
> at org.jboss.tools.openshift.internal.ui.treeitem.ObservableTreeItem.load(ObservableTreeItem.java:59)
> at org.jboss.tools.openshift.internal.ui.wizard.newapp.NewApplicationWizardModel.loadResources(NewApplicationWizardModel.java:247)
> at org.jboss.tools.openshift.internal.ui.wizard.common.AbstractProjectPage$3.doRun(AbstractProjectPage.java:255)
> at org.jboss.tools.openshift.internal.common.core.job.AbstractDelegatingMonitorJob.run(AbstractDelegatingMonitorJob.java:37)
> at org.eclipse.core.internal.jobs.Worker.run(Worker.java:63)
> {code}
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
[JBoss JIRA] (JBIDE-26697) Possible memory leak during Loading projects job when using CRC
by André Dietisheim (Jira)
[ https://issues.jboss.org/browse/JBIDE-26697?page=com.atlassian.jira.plugi... ]
André Dietisheim edited comment on JBIDE-26697 at 10/21/19 10:22 AM:
---------------------------------------------------------------------
I see 3 possible approaches to fix this situation:
* Watch events only: possible, but there are 2 problems:
** We need the resource instance that's affected. An event only carries a reference to the affected object, which provides name, namespace, timestamp etc. but lacks the resource instance itself. We would thus have to introduce an additional call to the backend to retrieve this resource.
** The change type that we provide when informing of changes: events always arrive as "ADD". We'd thus have to guess the "DELETE" and "MODIFY" change types from the event message. Usually pods that are killed are notified with a "Killing" message, new pods with "Created". Every other event would then be a "MODIFIED". This solution seems possible but unreliable.
The code for this could look like the following:
{code}
IEvent event = (IEvent) resource;
IObjectReference reference = event.getInvolvedObject();
String name = reference.getName();
String namespace = reference.getNamespace();
String kind = reference.getKind();
// additional backend call to retrieve the resource that the event refers to
IResource res = conn.getResource(kind, namespace, name);
// events are always reported as "ADD"; guess the real change type from the message
String message = event.getMessage();
ChangeType guessedChange = null;
if (message.contains("Killing")) {
    guessedChange = ChangeType.DELETED;
} else if (message.contains("Created")) {
    guessedChange = ChangeType.ADDED;
} else {
    guessedChange = ChangeType.MODIFIED;
}
{code}
* Hack the okHttp Dispatcher so that we reuse the same thread for watcher requests: Dispatcher#executed(Call) is invoked for every call and the dispatcher creates a thread for it. Call#request() returns the request, whose Request#url would let us verify whether we're dispatching for a watcher. The con is that we'd be contradicting okHttp's design principles.
* Use a 2nd client for the watcher part: use AsyncHttpClient, which does not force a 1:1 request-to-thread model: https://github.com/AsyncHttpClient/async-http-client/
was (Author: adietish):
I see 3 possible approaches to fix this situation:
* Watch events only: Possible but there are 2 problems:
** We need the resource instance that's affected. In an event we only have the affected object which provides name, namespace, timestamp etc. and lacks the resource instance. We thus have to introduce an additional call to the backend to retrieve this resource. The code for this could look like the following:
{code}
IEvent event = (IEvent) resource;
IObjectReference reference = event.getInvolvedObject();
String name = reference.getName();
String namespace = reference.getNamespace();
String kind = reference.getKind();
IResource res = conn.getResource(kind, namespace, name);
String message = event.getMessage();
ChangeType guessedChange = null;
if (message.contains("Killing")) {
guessedChange = ChangeType.DELETED;
} else if (message.contains("Created")) {
guessedChange = ChangeType.ADDED;
} else {
guessedChange = ChangeType.MODIFIED;
}
{code}
** the change type that we provide when informing of changes: events are always "ADD". We'd thus have to guess "DELETE" and "MODIFY" change types via the event message. Usually pods that are killed are notified with a "Killing" message, new pods via "Created". Every other event then would be a "MODIFIED". This solution seems possible but unreliable.
* hack the okHttp Dispatcher so that we're reusing the same thread for watcher requests: Dispatcher#executed(Call) is invoked for every call and the dispatcher creates a thread for it. Call#request() returns the request, whose Request#url would let us verify whether we're dispatching for a watcher. The con for this is that we're actually contradicting the okHttp design principles
* use a 2nd client for the watcher part: use AsyncHttpClient which does not force a 1:1 request-to-thread model: https://github.com/AsyncHttpClient/async-http-client/
[JBoss JIRA] (JBIDE-26697) Possible memory leak during Loading projects job when using CRC
by André Dietisheim (Jira)
[ https://issues.jboss.org/browse/JBIDE-26697?page=com.atlassian.jira.plugi... ]
André Dietisheim edited comment on JBIDE-26697 at 10/21/19 10:21 AM:
---------------------------------------------------------------------
I see 3 possible approaches to fix this situation:
* Watch events only: possible, but there are 2 problems:
** We need the resource instance that's affected. An event only carries a reference to the affected object, which provides name, namespace, timestamp etc. but lacks the resource instance itself. We would thus have to introduce an additional call to the backend to retrieve this resource. The code for this could look like the following:
{code}
IEvent event = (IEvent) resource;
IObjectReference reference = event.getInvolvedObject();
String name = reference.getName();
String namespace = reference.getNamespace();
String kind = reference.getKind();
// additional backend call to retrieve the resource that the event refers to
IResource res = conn.getResource(kind, namespace, name);
// events are always reported as "ADD"; guess the real change type from the message
String message = event.getMessage();
ChangeType guessedChange = null;
if (message.contains("Killing")) {
    guessedChange = ChangeType.DELETED;
} else if (message.contains("Created")) {
    guessedChange = ChangeType.ADDED;
} else {
    guessedChange = ChangeType.MODIFIED;
}
{code}
** The change type that we provide when informing of changes: events always arrive as "ADD". We'd thus have to guess the "DELETE" and "MODIFY" change types from the event message. Usually pods that are killed are notified with a "Killing" message, new pods with "Created". Every other event would then be a "MODIFIED". This solution seems possible but unreliable.
* Hack the okHttp Dispatcher so that we reuse the same thread for watcher requests: Dispatcher#executed(Call) is invoked for every call and the dispatcher creates a thread for it. Call#request() returns the request, whose Request#url would let us verify whether we're dispatching for a watcher. The con is that we'd be contradicting okHttp's design principles.
* Use a 2nd client for the watcher part: use AsyncHttpClient, which does not force a 1:1 request-to-thread model: https://github.com/AsyncHttpClient/async-http-client/
was (Author: adietish):
I see 3 possible approaches to fix this situation:
* Watch events only: Possible but there are 2 problems:
** We need the resource instance that's affected. In an event we only have the affected object which provides name, namespace, timestamp etc. and lacks the resource instance. We thus have to introduce an additional call to the backend to retrieve this resource
** the change type that we provide when informing of changes: events are always "ADD". We'd thus have to guess "DELETE" and "MODIFY" change types via the event message. Usually pods that are killed are notified with a "Killing" message, new pods via "Created". Every other event then would be a "MODIFIED". This solution seems possible but unreliable.
* hack the okHttp Dispatcher so that we're reusing the same thread for watcher requests: Dispatcher#executed(Call) is invoked for every call and the dispatcher creates a thread for it. Call#request() returns the request, whose Request#url would let us verify whether we're dispatching for a watcher. The con for this is that we're actually contradicting the okHttp design principles
* use a 2nd client for the watcher part: use AsyncHttpClient which does not force a 1:1 request-to-thread model: https://github.com/AsyncHttpClient/async-http-client/
[JBoss JIRA] (JBIDE-26697) Possible memory leak during Loading projects job when using CRC
by André Dietisheim (Jira)
[ https://issues.jboss.org/browse/JBIDE-26697?page=com.atlassian.jira.plugi... ]
André Dietisheim edited comment on JBIDE-26697 at 10/21/19 10:07 AM:
---------------------------------------------------------------------
Then there's a minimal improvement that we can provide: we don't watch for Events, Builds, PVCs and Templates.
Events and Builds are not editable, so we don't need to watch them in Eclipse. PVCs and Templates are editable, but our properties view is empty for them.
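For illustration, a rough sketch of that filtering; the class name and the exact kind identifiers are assumptions, not the real tooling code:
{code:java}
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch of the proposed minimal improvement.
public class WatchKindFilter {

    // kinds we would stop watching: not editable (Event, Build)
    // or with empty property pages (PersistentVolumeClaim, Template)
    private static final List<String> UNWATCHED =
            Arrays.asList("Event", "Build", "PersistentVolumeClaim", "Template");

    public static List<String> watchableKinds(List<String> allKinds) {
        return allKinds.stream()
                .filter(kind -> !UNWATCHED.contains(kind))
                .collect(Collectors.toList());
    }
}
{code}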
was (Author: adietish):
Then there's a minimal improvement that we can provide: we don't watch for Events, PVCs and Templates.
Events are not editable, so we don't need to watch them in Eclipse. PVCs and Templates are editable, but our properties view is empty for them.
[JBoss JIRA] (JBIDE-26697) Possible memory leak during Loading projects job when using CRC
by André Dietisheim (Jira)
[ https://issues.jboss.org/browse/JBIDE-26697?page=com.atlassian.jira.plugi... ]
André Dietisheim commented on JBIDE-26697:
------------------------------------------
Then there's a minimal improvement that we can provide: we don't watch for Events, PVCs and Templates.
Events are not editable, so we don't need to watch them in Eclipse. PVCs and Templates are editable, but our properties view is empty for them.
[JBoss JIRA] (JBIDE-26697) Possible memory leak during Loading projects job when using CRC
by André Dietisheim (Jira)
[ https://issues.jboss.org/browse/JBIDE-26697?page=com.atlassian.jira.plugi... ]
André Dietisheim commented on JBIDE-26697:
------------------------------------------
I see 3 possible approaches to fix this situation:
* Watch events only: possible, but there are 2 problems:
** We need the resource instance that's affected. An event only carries a reference to the affected object, which provides name, namespace, timestamp etc. but lacks the resource instance itself. We would thus have to introduce an additional call to the backend to retrieve this resource.
** The change type that we provide when informing of changes: events always arrive as "ADD". We'd thus have to guess the "DELETE" and "MODIFY" change types from the event message. Usually pods that are killed are notified with a "Killing" message, new pods with "Created". Every other event would then be a "MODIFIED". This solution seems possible but unreliable.
* Hack the okHttp Dispatcher so that we reuse the same thread for watcher requests: Dispatcher#executed(Call) is invoked for every call and the dispatcher creates a thread for it. Call#request() returns the request, whose Request#url would let us verify whether we're dispatching for a watcher. The con is that we'd be contradicting okHttp's design principles.
* Use a 2nd client for the watcher part: use AsyncHttpClient, which does not force a 1:1 request-to-thread model: https://github.com/AsyncHttpClient/async-http-client/
[JBoss JIRA] (JBIDE-26697) Possible memory leak during Loading projects job when using CRC
by André Dietisheim (Jira)
[ https://issues.jboss.org/browse/JBIDE-26697?page=com.atlassian.jira.plugi... ]
André Dietisheim edited comment on JBIDE-26697 at 10/21/19 9:51 AM:
--------------------------------------------------------------------
Watching is done by adding the *watch=true* parameter to the request for a resource (type). We thus have to request each resource type separately (there's no bulk watching endpoint). In consequence, okHttp opens up 12 threads, one listening on each of those web sockets (see the sketch below the quote). This is in accordance with okHttp's basic design principle of not having any blocking whatsoever:
https://square.github.io/okhttp/concurrency/
{quote}
So we have a dedicated thread for every socket that just reads frames and dispatches them.
{quote}
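For illustration, a small sketch of how such watch URLs look, one per resource type; the server address, namespace and the selection of kinds are made up:
{code:java}
import okhttp3.HttpUrl;

// Hypothetical sketch: one watch request per resource type, each of which
// ends up with its own dedicated okHttp reader thread.
public class WatchUrls {

    public static void main(String[] args) {
        String[] kinds = { "pods", "services", "replicationcontrollers" }; // 3 of the ~12 watched types
        for (String kind : kinds) {
            HttpUrl url = HttpUrl.parse("https://api.crc.testing:8443")
                    .newBuilder()
                    .addPathSegments("api/v1/namespaces/myproject/" + kind)
                    .addQueryParameter("watch", "true")
                    .build();
            System.out.println(url); // e.g. https://.../pods?watch=true
        }
    }
}
{code}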
was (Author: adietish):
The root cause here is that we're watching for changes for 12 resource types on different url. In consequence okHttp will open up connections and threads for each one of them. This results from the basic design principle of okHttp where they try not to have any blocking whatsoever:
https://square.github.io/okhttp/concurrency/
{quote}
So we have a dedicated thread for every socket that just reads frames and dispatches them.
{quote}
[JBoss JIRA] (JBIDE-26697) Possible memory leak during Loading projects job when using CRC
by André Dietisheim (Jira)
[ https://issues.jboss.org/browse/JBIDE-26697?page=com.atlassian.jira.plugi... ]
André Dietisheim commented on JBIDE-26697:
------------------------------------------
The root cause here is that we're watching for changes on 12 resource types, each on a different url. In consequence, okHttp opens up a connection and a thread for each one of them. This follows from okHttp's basic design principle of not having any blocking whatsoever:
https://square.github.io/okhttp/concurrency/
{quote}
So we have a dedicated thread for every socket that just reads frames and dispatches them.
{quote}
[JBoss JIRA] (JBIDE-26902) Connection wizard: opening conn without token cannot be finished (have to wait for period)
by André Dietisheim (Jira)
[ https://issues.jboss.org/browse/JBIDE-26902?page=com.atlassian.jira.plugi... ]
André Dietisheim commented on JBIDE-26902:
------------------------------------------
The oc v4 binary has a flag that makes it *NOT* connect to the server:
{code}
oc version --client
{code}
The oc v3 binary unfortunately does not have it.
> Connection wizard: opening conn without token cannot be finished (have to wait for period)
> ------------------------------------------------------------------------------------------
>
> Key: JBIDE-26902
> URL: https://issues.jboss.org/browse/JBIDE-26902
> Project: Tools (JBoss Tools)
> Issue Type: Bug
> Components: openshift
> Affects Versions: 4.13.0.AM1
> Reporter: André Dietisheim
> Priority: Major
> Labels: connection_wizard
> Fix For: 4.14.x
>
> Attachments: screenshot-1.png
>
>
> Steps:
> # ASSERT: have a connection to an OS3 server *WITHOUT* token/pw saved
> # ASSERT: make sure that the current server for oc is NOT reachable (e.g. the current server is a cdk/crc that is *NOT* running)
> # ASSERT: have oc binary set as oc binary in preferences
> # EXEC: OpenShift explorer: restart Eclipse and, once back in Eclipse, open up the connection to the OS3 server
> # ASSERT: connection wizard pops up so that you can retrieve a token/provide pw
> # EXEC: retrieve token / provide pw
> # EXEC: try to "Finish"
> Result:
> You cannot hit "Finish", it's disabled:
> !screenshot-1.png!
> If you look closer you discover that the title area shows that it's stuck trying to verify the oc version.
> If you wait a bit, "Finish" gets enabled. It gets enabled as soon as the oc binary has timed out trying to connect to the current server.
[JBoss JIRA] (JBIDE-26902) Connection wizard: opening conn without token cannot be finished (have to wait for period)
by André Dietisheim (Jira)
[ https://issues.jboss.org/browse/JBIDE-26902?page=com.atlassian.jira.plugi... ]
André Dietisheim updated JBIDE-26902:
-------------------------------------
Workaround Description: open up "Advanced >>" section & check "Override oc location:"
Workaround: Workaround Exists