[JBoss JIRA] (MODCLUSTER-658) Implement mod_cluster worker for Undertow embedded and standalone use case
by Michal Karm (Jira)
[ https://issues.jboss.org/browse/MODCLUSTER-658?page=com.atlassian.jira.pl... ]
Michal Karm commented on MODCLUSTER-658:
----------------------------------------
Leaving the topic.
If [~ezeaguilar] is interested, we might find someone to look into it again.
> Implement mod_cluster worker for Undertow embedded and standalone use case
> --------------------------------------------------------------------------
>
> Key: MODCLUSTER-658
> URL: https://issues.jboss.org/browse/MODCLUSTER-658
> Project: mod_cluster
> Issue Type: Feature Request
> Reporter: Michal Karm
> Priority: Minor
>
> This is a placeholder JIRA for a possible implementation/artifact. It has no formal planning or priority set, and it might not happen at all.
> h2. Workers and balancers
> The mod_cluster project comprises two kinds of actors: balancers and workers.
> h3. Balancers
> Currently there are three balancer implementations:
> # mod_proxy_cluster module for Apache HTTP Server
> # Undertow mod_cluster filter that can be used both in embedded Undertow and in the WildFly application server
> # mod_proxy_cluster Nginx implementation
> h3. Workers
> There are the following worker implementations:
> # JBoss AS 5 / mod_cluster.sar worker implementation, {color:red}*Abandoned/Legacy*{color}
> # JBoss AS 7 mod_cluster subsystem, {color:orange}*Legacy*{color}
> # WildFly application server, {color:green}*Current*{color}
> # Tomcat 6.0/7.0/8.0, {color:orange}*Legacy*{color}
> # Tomcat 8.5/9.0, {color:green}*Current*{color}
> In addition to those, we would like to see mod_cluster worker logic implemented for additional web servers, such as:
> # Jetty
> # Go net/http, i.e. Go web servers (mostly for REST APIs)
> # Undertow embedded
> # Undertow standalone (UNDERTOW-1356)
> h3. Undertow worker implementation
> There should be a mod_cluster worker library that would enable Undertow, in both its embedded and standalone (UNDERTOW-1356) flavours, to act as a worker, i.e. to report to a mod_cluster balancer (httpd, Nginx, or another Undertow instance).
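> To make the intent concrete, below is a minimal, illustrative sketch (not the proposed library and not an existing API) of an embedded Undertow server that announces itself to a balancer by hand. The balancer host, the MCMP port 6666, the JVMRoute and the context are placeholders; the CONFIG and ENABLE-APP messages approximate what a mod_cluster worker sends, and a real worker library would also send periodic STATUS messages and deregister on shutdown.
> {code:java}
> import io.undertow.Undertow;
> import io.undertow.util.Headers;
>
> import java.io.OutputStream;
> import java.net.Socket;
> import java.nio.charset.StandardCharsets;
>
> public class UndertowWorkerSketch {
>
>     public static void main(String[] args) throws Exception {
>         // Embedded Undertow server that will handle the proxied application traffic.
>         Undertow server = Undertow.builder()
>                 .addHttpListener(8080, "127.0.0.1")
>                 .setHandler(exchange -> {
>                     exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain");
>                     exchange.getResponseSender().send("served by worker-1");
>                 })
>                 .build();
>         server.start();
>
>         // What the proposed worker library would automate: register this node and
>         // its context with the balancer's MCMP management port (placeholders below).
>         sendMcmp("CONFIG", "JVMRoute=worker-1&Host=127.0.0.1&Port=8080&Type=http");
>         sendMcmp("ENABLE-APP", "JVMRoute=worker-1&Alias=localhost&Context=/clusterbench");
>     }
>
>     // Sends one MCMP message as a raw HTTP request; MCMP uses custom methods
>     // (CONFIG, ENABLE-APP, STATUS, ...), so a plain socket is used here.
>     private static void sendMcmp(String method, String body) throws Exception {
>         try (Socket socket = new Socket("balancer.example.com", 6666);
>              OutputStream out = socket.getOutputStream()) {
>             String request = method + " / HTTP/1.0\r\n"
>                     + "Content-Type: application/x-www-form-urlencoded\r\n"
>                     + "Content-Length: " + body.getBytes(StandardCharsets.US_ASCII).length + "\r\n"
>                     + "\r\n"
>                     + body;
>             out.write(request.getBytes(StandardCharsets.US_ASCII));
>             out.flush();
>         }
>     }
> }
> {code}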
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (MODCLUSTER-658) Implement mod_cluster worker for Undertow embedded and standalone use case
by Michal Karm (Jira)
[ https://issues.jboss.org/browse/MODCLUSTER-658?page=com.atlassian.jira.pl... ]
Michal Karm reassigned MODCLUSTER-658:
--------------------------------------
Assignee: (was: Michal Karm)
> Implement mod_cluster worker for Undertow embedded and standalone use case
> --------------------------------------------------------------------------
>
> Key: MODCLUSTER-658
> URL: https://issues.jboss.org/browse/MODCLUSTER-658
> Project: mod_cluster
> Issue Type: Feature Request
> Reporter: Michal Karm
> Priority: Minor
>
> This is a placeholder JIRA for a possible implementation/artifact. It has no formal planning or priority set, and it might not happen at all.
> h2. Workers and balancers
> The mod_cluster project comprises two kinds of actors: balancers and workers.
> h3. Balancers
> Currently there are three balancer implementations:
> # mod_proxy_cluster module for Apache HTTP Server
> # Undertow mod_cluster filter that can be used both in embedded Undertow and in the WildFly application server
> # mod_proxy_cluster Nginx implementation
> h3. Workers
> There are the following worker implementations:
> # JBoss AS 5 / mod_cluster.sar worker implementation, {color:red}*Abandoned/Legacy*{color}
> # JBoss AS 7 mod_cluster subsystem, {color:orange}*Legacy*{color}
> # WildFly application server, {color:green}*Current*{color}
> # Tomcat 6.0/7.0/8.0, {color:orange}*Legacy*{color}
> # Tomcat 8.5/9.0, {color:green}*Current*{color}
> In addition to those, we would like to see mod_cluster worker logic implemented for additional web servers, such as:
> # Jetty
> # Go net/http, i.e. Go web servers (mostly for REST APIs)
> # Undertow embedded
> # Undertow standalone (UNDERTOW-1356)
> h3. Undertow worker implementation
> There should be a mod_cluster worker library that would enable Undertow, in both its embedded and standalone (UNDERTOW-1356) flavours, to act as a worker, i.e. to report to a mod_cluster balancer (httpd, Nginx, or another Undertow instance).
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (MODCLUSTER-697) Segfault when DeterministicFailover On
by Michal Karm (Jira)
[ https://issues.jboss.org/browse/MODCLUSTER-697?page=com.atlassian.jira.pl... ]
Michal Karm resolved MODCLUSTER-697.
------------------------------------
Fix Version/s: 1.3.12.Final
Resolution: Done
> Segfault when DeterministicFailover On
> --------------------------------------
>
> Key: MODCLUSTER-697
> URL: https://issues.jboss.org/browse/MODCLUSTER-697
> Project: mod_cluster
> Issue Type: Bug
> Components: Native (httpd modules)
> Affects Versions: 1.3.8.Final
> Environment: jbcs-httpd24-mod_cluster-native-1.3.8-1.Final_redhat_1.jbcs.el7.x86_64
> Reporter: Hisanobu Okuda
> Assignee: Michal Karm
> Priority: Blocker
> Fix For: 1.3.12.Final
>
>
> When DeterministicFailover is set to On, httpd segfaults:
> {code}
> (gdb) bt
> #0 proxy_worker_cmp (a=0x7ffb18002438, b=0x7ffb18002470) at mod_proxy_cluster.c:154
> #1 0x00007ffba4c87ba3 in msort_with_tmp (p=0x7ffb551b37f0, b=0x7ffb551b3748, n=2) at msort.c:124
> #2 0x00007ffba4c87b02 in msort_with_tmp (n=2, b=0x7ffb551b3748, p=0x7ffb551b37f0) at msort.c:45
> #3 msort_with_tmp (p=0x7ffb551b37f0, b=0x7ffb551b3748, n=5) at msort.c:53
> #4 0x00007ffba4c87ef7 in msort_with_tmp (n=5, b=<optimized out>, p=0x7ffb551b37f0) at msort.c:45
> #5 __GI___qsort_r (b=b@entry=0x7ffb18002438, n=n@entry=5, s=s@entry=56, cmp=cmp@entry=0x7ffb90ae5e00 <proxy_worker_cmp>, arg=arg@entry=0x0) at msort.c:254
> #6 0x00007ffba4c88148 in __GI_qsort (b=b@entry=0x7ffb18002438, n=n@entry=5, s=s@entry=56, cmp=cmp@entry=0x7ffb90ae5e00 <proxy_worker_cmp>) at msort.c:308
> #7 0x00007ffb90ae9ae7 in internal_find_best_byrequests (conf=0x55fe55cacb88, node_table=0x7ffb180108e0, context_table=0x7ffb1800fed8,
> vhost_table=0x7ffb1802baa0, failoverdomain=0, domain=0x0, r=0x7ffb18006a10, balancer=0x55fe56065140) at mod_proxy_cluster.c:2331
> #8 find_best_worker (balancer=0x55fe56065140, conf=conf@entry=0x55fe55cacb88, r=r@entry=0x7ffb18006a10, domain=domain@entry=0x0,
> failoverdomain=failoverdomain@entry=0, vhost_table=vhost_table@entry=0x7ffb1802baa0, context_table=context_table@entry=0x7ffb1800fed8,
> node_table=node_table@entry=0x7ffb180108e0, recurse=recurse@entry=1) at mod_proxy_cluster.c:3484
> #9 0x00007ffb90aed611 in proxy_cluster_pre_request (worker=0x7ffb551b3b38, balancer=0x7ffb551b3b30, r=0x7ffb18006a10, conf=0x55fe55cacb88,
> url=0x7ffb551b3b40) at mod_proxy_cluster.c:3851
> #10 0x00007ffb9a409fa6 in proxy_run_pre_request () from /usr/jboss/jbossweb/shopapp_ws_m01/modules/mod_proxy.so
> #11 0x00007ffb9a40ec62 in ap_proxy_pre_request () from /usr/jboss/jbossweb/shopapp_ws_m01/modules/mod_proxy.so
> #12 0x00007ffb9a40a7dc in proxy_handler () from /usr/jboss/jbossweb/shopapp_ws_m01/modules/mod_proxy.so
> ...
> (gdb) disassemble
> Dump of assembler code for function proxy_worker_cmp:
> 0x00007ffb90ae5e00 <+0>: mov (%rdi),%rax
> => 0x00007ffb90ae5e03 <+3>: mov 0x18(%rax),%rdi
> (gdb) info registers rax
> rax 0x9e10fcde4a34ed64 -7056862584431645340
> (gdb) x 0x9e10fcde4a34ed64
> 0x9e10fcde4a34ed64: Cannot access memory at address 0x9e10fcde4a34ed64
> (gdb)
> {code}
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (MODCLUSTER-697) Segfault when DeterministicFailover On
by Michal Karm (Jira)
[ https://issues.jboss.org/browse/MODCLUSTER-697?page=com.atlassian.jira.pl... ]
Work on MODCLUSTER-697 started by Michal Karm.
----------------------------------------------
> Segfault when DeterministicFailover On
> --------------------------------------
>
> Key: MODCLUSTER-697
> URL: https://issues.jboss.org/browse/MODCLUSTER-697
> Project: mod_cluster
> Issue Type: Bug
> Components: Native (httpd modules)
> Affects Versions: 1.3.8.Final
> Environment: jbcs-httpd24-mod_cluster-native-1.3.8-1.Final_redhat_1.jbcs.el7.x86_64
> Reporter: Hisanobu Okuda
> Assignee: Michal Karm
> Priority: Blocker
>
> When DeterministicFailover is set to On, httpd segfaults:
> {code}
> (gdb) bt
> #0 proxy_worker_cmp (a=0x7ffb18002438, b=0x7ffb18002470) at mod_proxy_cluster.c:154
> #1 0x00007ffba4c87ba3 in msort_with_tmp (p=0x7ffb551b37f0, b=0x7ffb551b3748, n=2) at msort.c:124
> #2 0x00007ffba4c87b02 in msort_with_tmp (n=2, b=0x7ffb551b3748, p=0x7ffb551b37f0) at msort.c:45
> #3 msort_with_tmp (p=0x7ffb551b37f0, b=0x7ffb551b3748, n=5) at msort.c:53
> #4 0x00007ffba4c87ef7 in msort_with_tmp (n=5, b=<optimized out>, p=0x7ffb551b37f0) at msort.c:45
> #5 __GI___qsort_r (b=b@entry=0x7ffb18002438, n=n@entry=5, s=s@entry=56, cmp=cmp@entry=0x7ffb90ae5e00 <proxy_worker_cmp>, arg=arg@entry=0x0) at msort.c:254
> #6 0x00007ffba4c88148 in __GI_qsort (b=b@entry=0x7ffb18002438, n=n@entry=5, s=s@entry=56, cmp=cmp@entry=0x7ffb90ae5e00 <proxy_worker_cmp>) at msort.c:308
> #7 0x00007ffb90ae9ae7 in internal_find_best_byrequests (conf=0x55fe55cacb88, node_table=0x7ffb180108e0, context_table=0x7ffb1800fed8,
> vhost_table=0x7ffb1802baa0, failoverdomain=0, domain=0x0, r=0x7ffb18006a10, balancer=0x55fe56065140) at mod_proxy_cluster.c:2331
> #8 find_best_worker (balancer=0x55fe56065140, conf=conf@entry=0x55fe55cacb88, r=r@entry=0x7ffb18006a10, domain=domain@entry=0x0,
> failoverdomain=failoverdomain@entry=0, vhost_table=vhost_table@entry=0x7ffb1802baa0, context_table=context_table@entry=0x7ffb1800fed8,
> node_table=node_table@entry=0x7ffb180108e0, recurse=recurse@entry=1) at mod_proxy_cluster.c:3484
> #9 0x00007ffb90aed611 in proxy_cluster_pre_request (worker=0x7ffb551b3b38, balancer=0x7ffb551b3b30, r=0x7ffb18006a10, conf=0x55fe55cacb88,
> url=0x7ffb551b3b40) at mod_proxy_cluster.c:3851
> #10 0x00007ffb9a409fa6 in proxy_run_pre_request () from /usr/jboss/jbossweb/shopapp_ws_m01/modules/mod_proxy.so
> #11 0x00007ffb9a40ec62 in ap_proxy_pre_request () from /usr/jboss/jbossweb/shopapp_ws_m01/modules/mod_proxy.so
> #12 0x00007ffb9a40a7dc in proxy_handler () from /usr/jboss/jbossweb/shopapp_ws_m01/modules/mod_proxy.so
> ...
> (gdb) disassemble
> Dump of assembler code for function proxy_worker_cmp:
> 0x00007ffb90ae5e00 <+0>: mov (%rdi),%rax
> => 0x00007ffb90ae5e03 <+3>: mov 0x18(%rax),%rdi
> (gdb) info registers rax
> rax 0x9e10fcde4a34ed64 -7056862584431645340
> (gdb) x 0x9e10fcde4a34ed64
> 0x9e10fcde4a34ed64: Cannot access memory at address 0x9e10fcde4a34ed64
> (gdb)
> {code}
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (MODCLUSTER-697) Segfault when DeterministicFailover On
by Masafumi Miura (Jira)
[ https://issues.jboss.org/browse/MODCLUSTER-697?page=com.atlassian.jira.pl... ]
Masafumi Miura updated MODCLUSTER-697:
--------------------------------------
Git Pull Request: https://github.com/modcluster/mod_cluster/pull/399, https://github.com/modcluster/mod_proxy_cluster/commit/6d4719cf0aee6d2c8c..., https://github.com/modcluster/mod_proxy_cluster/commit/0ff923bcb4c789df8f...
> Segfault when DeterministicFailover On
> --------------------------------------
>
> Key: MODCLUSTER-697
> URL: https://issues.jboss.org/browse/MODCLUSTER-697
> Project: mod_cluster
> Issue Type: Bug
> Components: Native (httpd modules)
> Affects Versions: 1.3.8.Final
> Environment: jbcs-httpd24-mod_cluster-native-1.3.8-1.Final_redhat_1.jbcs.el7.x86_64
> Reporter: Hisanobu Okuda
> Assignee: Michal Karm
> Priority: Blocker
>
> When DeterministicFailover is set to On, httpd segfaults:
> {code}
> (gdb) bt
> #0 proxy_worker_cmp (a=0x7ffb18002438, b=0x7ffb18002470) at mod_proxy_cluster.c:154
> #1 0x00007ffba4c87ba3 in msort_with_tmp (p=0x7ffb551b37f0, b=0x7ffb551b3748, n=2) at msort.c:124
> #2 0x00007ffba4c87b02 in msort_with_tmp (n=2, b=0x7ffb551b3748, p=0x7ffb551b37f0) at msort.c:45
> #3 msort_with_tmp (p=0x7ffb551b37f0, b=0x7ffb551b3748, n=5) at msort.c:53
> #4 0x00007ffba4c87ef7 in msort_with_tmp (n=5, b=<optimized out>, p=0x7ffb551b37f0) at msort.c:45
> #5 __GI___qsort_r (b=b@entry=0x7ffb18002438, n=n@entry=5, s=s@entry=56, cmp=cmp@entry=0x7ffb90ae5e00 <proxy_worker_cmp>, arg=arg@entry=0x0) at msort.c:254
> #6 0x00007ffba4c88148 in __GI_qsort (b=b@entry=0x7ffb18002438, n=n@entry=5, s=s@entry=56, cmp=cmp@entry=0x7ffb90ae5e00 <proxy_worker_cmp>) at msort.c:308
> #7 0x00007ffb90ae9ae7 in internal_find_best_byrequests (conf=0x55fe55cacb88, node_table=0x7ffb180108e0, context_table=0x7ffb1800fed8,
> vhost_table=0x7ffb1802baa0, failoverdomain=0, domain=0x0, r=0x7ffb18006a10, balancer=0x55fe56065140) at mod_proxy_cluster.c:2331
> #8 find_best_worker (balancer=0x55fe56065140, conf=conf@entry=0x55fe55cacb88, r=r@entry=0x7ffb18006a10, domain=domain@entry=0x0,
> failoverdomain=failoverdomain@entry=0, vhost_table=vhost_table@entry=0x7ffb1802baa0, context_table=context_table@entry=0x7ffb1800fed8,
> node_table=node_table@entry=0x7ffb180108e0, recurse=recurse@entry=1) at mod_proxy_cluster.c:3484
> #9 0x00007ffb90aed611 in proxy_cluster_pre_request (worker=0x7ffb551b3b38, balancer=0x7ffb551b3b30, r=0x7ffb18006a10, conf=0x55fe55cacb88,
> url=0x7ffb551b3b40) at mod_proxy_cluster.c:3851
> #10 0x00007ffb9a409fa6 in proxy_run_pre_request () from /usr/jboss/jbossweb/shopapp_ws_m01/modules/mod_proxy.so
> #11 0x00007ffb9a40ec62 in ap_proxy_pre_request () from /usr/jboss/jbossweb/shopapp_ws_m01/modules/mod_proxy.so
> #12 0x00007ffb9a40a7dc in proxy_handler () from /usr/jboss/jbossweb/shopapp_ws_m01/modules/mod_proxy.so
> ...
> (gdb) disassemble
> Dump of assembler code for function proxy_worker_cmp:
> 0x00007ffb90ae5e00 <+0>: mov (%rdi),%rax
> => 0x00007ffb90ae5e03 <+3>: mov 0x18(%rax),%rdi
> (gdb) info registers rax
> rax 0x9e10fcde4a34ed64 -7056862584431645340
> (gdb) x 0x9e10fcde4a34ed64
> 0x9e10fcde4a34ed64: Cannot access memory at address 0x9e10fcde4a34ed64
> (gdb)
> {code}
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (MODCLUSTER-675) Failover scenario is not performed with httpd balancer - balancer fails to respond
by Jean-Frederic Clere (Jira)
[ https://issues.jboss.org/browse/MODCLUSTER-675?page=com.atlassian.jira.pl... ]
Jean-Frederic Clere reopened MODCLUSTER-675:
--------------------------------------------
In fact, this is JBCS-798.
> Failover scenario is not performed with httpd balancer - balancer fails to respond
> ----------------------------------------------------------------------------------
>
> Key: MODCLUSTER-675
> URL: https://issues.jboss.org/browse/MODCLUSTER-675
> Project: mod_cluster
> Issue Type: Bug
> Components: Native (httpd modules)
> Affects Versions: 1.3.8.Final
> Environment: RHEL 7, CentOS 7 on x86_64 platform
> Reporter: Jan Kasik
> Assignee: Jean-Frederic Clere
> Priority: Blocker
> Attachments: error_log, httpd.zip, wildfly-16.0.0.Beta1-SNAPSHOT.zip
>
>
> When the second request in the failover scenario is made to check whether the failover took place, the server fails to respond. With Undertow as the balancer, this behavior cannot be reproduced. When I replace the jvmroute part of the session cookie with a worker that is still alive, I get the expected response.
> {noformat}
> # curl -v --cookie "JSESSIONID=54yxdncGr5im0fBLqIIUMon0klbS66X16aYC_cVW.jboss-eap-7.2-3" http://172.17.0.2:2080/clusterbench/jvmroute
> * About to connect() to 172.17.0.2 port 2080 (#0)
> * Trying 172.17.0.2...
> * Connected to 172.17.0.2 (172.17.0.2) port 2080 (#0)
> > GET /clusterbench/jvmroute HTTP/1.1
> > User-Agent: curl/7.29.0
> > Host: 172.17.0.2:2080
> > Accept: */*
> > Cookie: JSESSIONID=54yxdncGr5im0fBLqIIUMon0klbS66X16aYC_cVW.jboss-eap-7.2-3
> >
> * Empty reply from server
> * Connection #0 to host 172.17.0.2 left intact
> curl: (52) Empty reply from server
> {noformat}
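> The same check can be scripted; a minimal sketch using java.net.http (JDK 11+) follows. The URL and the cookie are taken from the curl reproducer above, and it assumes, as the scenario implies, that /clusterbench/jvmroute replies with the jvmRoute of the node that actually served the request.
> {code:java}
> import java.net.URI;
> import java.net.http.HttpClient;
> import java.net.http.HttpRequest;
> import java.net.http.HttpResponse;
>
> public class FailoverCheck {
>     public static void main(String[] args) throws Exception {
>         // mod_cluster sticky sessions encode the target node as a suffix:
>         // JSESSIONID=<sessionId>.<jvmRoute>. The route below belongs to the node
>         // that was shut down, so the balancer is expected to fail the request
>         // over to a surviving worker instead of returning an empty reply.
>         String cookie = "JSESSIONID=54yxdncGr5im0fBLqIIUMon0klbS66X16aYC_cVW.jboss-eap-7.2-3";
>
>         HttpClient client = HttpClient.newHttpClient();
>         HttpRequest request = HttpRequest.newBuilder()
>                 .uri(URI.create("http://172.17.0.2:2080/clusterbench/jvmroute"))
>                 .header("Cookie", cookie)
>                 .GET()
>                 .build();
>
>         HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
>         System.out.println("HTTP " + response.statusCode());
>         // After a successful failover this should print the route of a node
>         // that is still alive, not the route from the cookie above.
>         System.out.println("Served by: " + response.body());
>     }
> }
> {code}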
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (MODCLUSTER-697) Segfault when DeterministicFailover On
by Radoslav Husar (Jira)
[ https://issues.jboss.org/browse/MODCLUSTER-697?page=com.atlassian.jira.pl... ]
Radoslav Husar updated MODCLUSTER-697:
--------------------------------------
Priority: Blocker (was: Critical)
> Segfault when DeterministicFailover On
> --------------------------------------
>
> Key: MODCLUSTER-697
> URL: https://issues.jboss.org/browse/MODCLUSTER-697
> Project: mod_cluster
> Issue Type: Bug
> Components: Native (httpd modules)
> Affects Versions: 1.3.8.Final
> Environment: jbcs-httpd24-mod_cluster-native-1.3.8-1.Final_redhat_1.jbcs.el7.x86_64
> Reporter: Hisanobu Okuda
> Assignee: Radoslav Husar
> Priority: Blocker
>
> When DeterministicFailover is set to On, httpd segfaults:
> {code}
> (gdb) bt
> #0 proxy_worker_cmp (a=0x7ffb18002438, b=0x7ffb18002470) at mod_proxy_cluster.c:154
> #1 0x00007ffba4c87ba3 in msort_with_tmp (p=0x7ffb551b37f0, b=0x7ffb551b3748, n=2) at msort.c:124
> #2 0x00007ffba4c87b02 in msort_with_tmp (n=2, b=0x7ffb551b3748, p=0x7ffb551b37f0) at msort.c:45
> #3 msort_with_tmp (p=0x7ffb551b37f0, b=0x7ffb551b3748, n=5) at msort.c:53
> #4 0x00007ffba4c87ef7 in msort_with_tmp (n=5, b=<optimized out>, p=0x7ffb551b37f0) at msort.c:45
> #5 __GI___qsort_r (b=b@entry=0x7ffb18002438, n=n@entry=5, s=s@entry=56, cmp=cmp@entry=0x7ffb90ae5e00 <proxy_worker_cmp>, arg=arg@entry=0x0) at msort.c:254
> #6 0x00007ffba4c88148 in __GI_qsort (b=b@entry=0x7ffb18002438, n=n@entry=5, s=s@entry=56, cmp=cmp@entry=0x7ffb90ae5e00 <proxy_worker_cmp>) at msort.c:308
> #7 0x00007ffb90ae9ae7 in internal_find_best_byrequests (conf=0x55fe55cacb88, node_table=0x7ffb180108e0, context_table=0x7ffb1800fed8,
> vhost_table=0x7ffb1802baa0, failoverdomain=0, domain=0x0, r=0x7ffb18006a10, balancer=0x55fe56065140) at mod_proxy_cluster.c:2331
> #8 find_best_worker (balancer=0x55fe56065140, conf=conf@entry=0x55fe55cacb88, r=r@entry=0x7ffb18006a10, domain=domain@entry=0x0,
> failoverdomain=failoverdomain@entry=0, vhost_table=vhost_table@entry=0x7ffb1802baa0, context_table=context_table@entry=0x7ffb1800fed8,
> node_table=node_table@entry=0x7ffb180108e0, recurse=recurse@entry=1) at mod_proxy_cluster.c:3484
> #9 0x00007ffb90aed611 in proxy_cluster_pre_request (worker=0x7ffb551b3b38, balancer=0x7ffb551b3b30, r=0x7ffb18006a10, conf=0x55fe55cacb88,
> url=0x7ffb551b3b40) at mod_proxy_cluster.c:3851
> #10 0x00007ffb9a409fa6 in proxy_run_pre_request () from /usr/jboss/jbossweb/shopapp_ws_m01/modules/mod_proxy.so
> #11 0x00007ffb9a40ec62 in ap_proxy_pre_request () from /usr/jboss/jbossweb/shopapp_ws_m01/modules/mod_proxy.so
> #12 0x00007ffb9a40a7dc in proxy_handler () from /usr/jboss/jbossweb/shopapp_ws_m01/modules/mod_proxy.so
> ...
> (gdb) disassemble
> Dump of assembler code for function proxy_worker_cmp:
> 0x00007ffb90ae5e00 <+0>: mov (%rdi),%rax
> => 0x00007ffb90ae5e03 <+3>: mov 0x18(%rax),%rdi
> (gdb) info registers rax
> rax 0x9e10fcde4a34ed64 -7056862584431645340
> (gdb) x 0x9e10fcde4a34ed64
> 0x9e10fcde4a34ed64: Cannot access memory at address 0x9e10fcde4a34ed64
> (gdb)
> {code}
--
This message was sent by Atlassian Jira
(v7.12.1#712002)