André Dietisheim edited comment on JBIDE-26697 at 10/21/19 10:22 AM:
---------------------------------------------------------------------
I see 3 possible approaches to fix this situation:
* Watch events only: possible, but there are 2 problems:
** We need the resource instance that is affected. An event only carries a
reference to the involved object (name, namespace, timestamp, etc.), not the
resource instance itself. We'd thus have to make an additional call to the
backend to retrieve the resource.
** The change type that we report when notifying about changes: events are always
"ADD". We'd thus have to guess the "DELETE" and "MODIFY" change types from the
event message. Pods that are killed are usually announced with a "Killing"
message, new pods with "Created"; every other event would then be treated as
"MODIFIED". This solution seems possible but unreliable.
The code for this could look like the following:
{code}
IEvent event = (IEvent) resource;
// the event only references the involved object; fetch the actual resource
IObjectReference reference = event.getInvolvedObject();
String name = reference.getName();
String namespace = reference.getNamespace();
String kind = reference.getKind();
IResource res = conn.getResource(kind, namespace, name);
// guess the change type from the event message (unreliable, see above)
String message = event.getMessage();
ChangeType guessedChange;
if (message != null && message.contains("Killing")) {
    guessedChange = ChangeType.DELETED;
} else if (message != null && message.contains("Created")) {
    guessedChange = ChangeType.ADDED;
} else {
    guessedChange = ChangeType.MODIFIED;
}
{code}
* hack the okHttp Dispatcher so that we're reusing the same thread for watcher
requests: Dispatcher#executed(Call) is invoked for each call, and the dispatcher
creates a thread for it. Call#request() returns the request, and via Request#url
we could check whether we're dispatching for a watcher. The con is that we'd be
contradicting okHttp's design principles.
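Whichever way the dispatcher gets hooked, the watcher check itself boils down to inspecting the request URL. A minimal sketch of that check (the helper class and the URL patterns are assumptions, not part of the existing client; OpenShift/Kubernetes watch requests typically use a {{/watch/}} path segment or a {{watch=true}} query parameter):

```java
import java.net.URI;

// Hypothetical helper (not part of the existing client): OpenShift/Kubernetes
// watch requests typically carry a "/watch/" path segment or a "watch=true"
// query parameter, so the request URL alone identifies a watcher call.
final class WatchRequestDetector {

    static boolean isWatchRequest(String url) {
        URI uri = URI.create(url);
        String path = uri.getPath();
        String query = uri.getQuery();
        return (path != null && path.contains("/watch/"))
                || (query != null && query.contains("watch=true"));
    }
}
```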
* use a 2nd client for the watcher part: use AsyncHttpClient, which does not force
a 1:1 request/thread model:
https://github.com/AsyncHttpClient/async-http-client/
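Leaving AsyncHttpClient's exact API aside, the non-blocking model it enables can be sketched with the JDK 11 java.net.http.HttpClient (available here, since the log shows java.version=11.0.2): many in-flight watch connections share one small executor instead of each pinning a dedicated thread. The class names and the watch URL below are hypothetical, for illustration only:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class WatchClientSketch {

    // one small, shared pool: all watch connections multiplex over these
    // threads instead of each holding a dedicated thread for its lifetime
    private static final ExecutorService POOL = Executors.newFixedThreadPool(2);

    // hypothetical watch endpoint, for illustration only
    static final String WATCH_URL =
            "https://api.crc.testing:6443/api/v1/watch/namespaces/demo/pods";

    static HttpClient newSharedClient() {
        return HttpClient.newBuilder().executor(POOL).build();
    }

    static HttpRequest buildWatchRequest(String url) {
        return HttpRequest.newBuilder().uri(URI.create(url)).GET().build();
    }

    static CompletableFuture<Void> watch(HttpClient client, String url) {
        // sendAsync returns immediately; each line of the chunked watch
        // response (one JSON event per line) is processed on the shared pool
        return client.sendAsync(buildWatchRequest(url),
                        HttpResponse.BodyHandlers.ofLines())
                .thenAccept(response -> response.body().forEach(System.out::println));
    }

    public static void main(String[] args) {
        // the request is built but not sent, to keep the sketch runnable offline
        System.out.println("would watch: " + buildWatchRequest(WATCH_URL).uri());
    }
}
```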
Possible memory leak during Loading projects job when using CRC
---------------------------------------------------------------
Key: JBIDE-26697
URL: https://issues.jboss.org/browse/JBIDE-26697
Project: Tools (JBoss Tools)
Issue Type: Bug
Components: openshift
Affects Versions: 4.12.0.Final
Environment: Mac 10.14.5 (18F203), Fedora 29, crstudio GA-v20190626-0604-B4591
Reporter: Josef Kopriva
Assignee: André Dietisheim
Priority: Major
Fix For: 4.13.0.Final
Exception in error log:
{code:java}
eclipse.buildId=12.12.0.GA-v20190626-0604-B4591
java.version=11.0.2
java.vendor=Oracle Corporation
BootLoader constants: OS=macosx, ARCH=x86_64, WS=cocoa, NL=en_GB
Framework arguments: -product com.jboss.devstudio.core.product -product com.jboss.devstudio.core.product -keyring /Users/jkopriva/.eclipse_keyring
Command-line arguments: -os macosx -ws cocoa -arch x86_64 -product com.jboss.devstudio.core.product -data file:/Users/jkopriva/workspace_B4591_3/ -product com.jboss.devstudio.core.product -keyring /Users/jkopriva/.eclipse_keyring
org.eclipse.core.jobs
Error
Wed Jun 26 14:36:11 CEST 2019
An internal error occurred during: "Loading projects...".
java.lang.OutOfMemoryError: Java heap space
at org.jboss.dmr.JSONParser.yyParse(JSONParser.java:877)
at org.jboss.dmr.ModelNode.fromJSONString(ModelNode.java:1472)
at com.openshift.internal.restclient.ResourceFactory.create(ResourceFactory.java:100)
at com.openshift.internal.restclient.ResourceFactory.createInstanceFrom(ResourceFactory.java:149)
at com.openshift.internal.restclient.DefaultClient.execute(DefaultClient.java:304)
at com.openshift.internal.restclient.DefaultClient.execute(DefaultClient.java:275)
at com.openshift.internal.restclient.DefaultClient.execute(DefaultClient.java:264)
at com.openshift.internal.restclient.DefaultClient.list(DefaultClient.java:171)
at com.openshift.internal.restclient.DefaultClient.list(DefaultClient.java:160)
at com.openshift.internal.restclient.DefaultClient.list(DefaultClient.java:151)
at com.openshift.internal.restclient.capability.resources.ProjectTemplateListCapability.getCommonTemplates(ProjectTemplateListCapability.java:53)
at org.jboss.tools.openshift.internal.ui.wizard.newapp.ApplicationSourceTreeItems$1.visit(ApplicationSourceTreeItems.java:121)
at org.jboss.tools.openshift.internal.ui.wizard.newapp.ApplicationSourceTreeItems$1.visit(ApplicationSourceTreeItems.java:1)
at com.openshift.internal.restclient.model.KubernetesResource.accept(KubernetesResource.java:94)
at org.jboss.tools.openshift.internal.ui.wizard.newapp.ApplicationSourceTreeItems.loadTemplates(ApplicationSourceTreeItems.java:113)
at org.jboss.tools.openshift.internal.ui.wizard.newapp.ApplicationSourceTreeItems.createChildren(ApplicationSourceTreeItems.java:61)
at org.jboss.tools.openshift.internal.ui.treeitem.ObservableTreeItem.loadChildren(ObservableTreeItem.java:69)
at org.jboss.tools.openshift.internal.ui.treeitem.ObservableTreeItem.load(ObservableTreeItem.java:56)
at org.jboss.tools.openshift.internal.ui.treeitem.ObservableTreeItem.load(ObservableTreeItem.java:59)
at org.jboss.tools.openshift.internal.ui.wizard.newapp.NewApplicationWizardModel.loadResources(NewApplicationWizardModel.java:247)
at org.jboss.tools.openshift.internal.ui.wizard.common.AbstractProjectPage$3.doRun(AbstractProjectPage.java:255)
at org.jboss.tools.openshift.internal.common.core.job.AbstractDelegatingMonitorJob.run(AbstractDelegatingMonitorJob.java:37)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:63)
{code}
--
This message was sent by Atlassian Jira
(v7.13.8#713008)