Yeah, glancing at the code, the problem is likely in the header parsing.
On 4/15/11 7:17 AM, Jonathan Pearlin wrote:
Ike,
One more thing to add: when I wrote the smoke tests for the upload
API, I did notice a similar issue. However, it turned out that I was
not creating the multipart POST correctly (an error in how I was
programmatically building the multipart was causing the parsing to enter
into an infinite loop and run out of memory). However, once I fixed the
test code to produce a valid multipart post, everything was fine. In my
normal development testing, I tested by uploading the Hudson WAR file
(which is about 34 MB and much larger than the one you see the issue
with). However, while it is possible that this issue is due to whatever
is submitting the upload request not following the multipart
specification, the code probably does need to be changed to handle
multipart requests that do not contain completely valid data (if we
cannot use the Apache library). I would be curious to see what the POST
looks like that is causing the issue (in terms of the multipart
boundary, header and payload). It is fairly easy to modify the server
code to dump the incoming POST request to a file to see if it is at all
different than what is expected.
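As a rough sketch of that debugging step (the class and method names below are hypothetical, not the actual server code), the request stream could be copied to a dump sink before the multipart parser ever sees it:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

// Hypothetical helper for capturing a raw POST body. In the real server
// this would wrap the incoming request stream (and write to a file)
// before multipart parsing begins, so the exact bytes can be inspected.
public class RequestDumper {

    // Copy everything from the request stream to the dump sink and
    // return the number of bytes seen.
    public static long dump(InputStream request, OutputStream sink) throws IOException {
        byte[] buf = new byte[8192];
        long total = 0;
        int n;
        while ((n = request.read(buf)) != -1) {
            sink.write(buf, 0, n);
            total += n;
        }
        sink.flush();
        return total;
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for the incoming request body; a real dump would go to a file.
        byte[] body = "--boundary\r\npayload\r\n--boundary--\r\n"
                .getBytes(StandardCharsets.US_ASCII);
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        System.out.println("dumped " + dump(new ByteArrayInputStream(body), sink) + " bytes");
    }
}
```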
Thanks,
--Jonathan
On Fri, Apr 15, 2011 at 8:07 AM, Jonathan Pearlin <jpearlin1@gmail.com> wrote:
Ike,
I actually looked at using the Apache library (instead of
writing my own) when I started, but did not follow that path simply
because of the potential licensing issues. However, if Jason agrees
that it is okay to go down that path, I would love to replace the
custom multipart parsing code with a more mature library. That
being said, the multipart parsing currently relies on a library
Jason wrote a while back to treat the incoming stream as multipart
data. I can certainly try digging into this to see if I can
pinpoint what is causing the OOM issue (if it is in the HTTP server
code itself or in the multipart stream class).
Thanks,
--Jonathan
On Fri, Apr 15, 2011 at 7:50 AM, Heiko Braun <hbraun@redhat.com> wrote:
I've run into an OOM issue when uploading content through the
HTTP API again.
(https://issues.jboss.org/browse/JBAS-9268)
I did take a look at the current implementation and propose that
we change it in the following way:
- read multipart contents from the MIME boundary
- replace the custom multipart stream implementation with a
mature one (leaning on Apache Commons FileUpload)
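For reference, the boundary framing a multipart parser has to handle can be illustrated with the toy sketch below (illustrative only; the proposed branch leans on Commons FileUpload, whose MultipartStream does this incrementally on the stream rather than on a complete in-memory string):

```java
import java.util.ArrayList;
import java.util.List;

// Toy illustration of multipart framing: parts are delimited by
// "--" + boundary lines, and the stream is terminated by the closing
// "--" + boundary + "--" delimiter. Not the server implementation.
public class MultipartSketch {

    // Return the raw content (headers + payload) of each part, or an
    // empty list if the input is malformed (no closing delimiter),
    // rather than scanning forever.
    public static List<String> parts(String body, String boundary) {
        String delimiter = "--" + boundary;
        List<String> result = new ArrayList<>();
        int pos = body.indexOf(delimiter);
        while (pos != -1) {
            int next = body.indexOf(delimiter, pos + delimiter.length());
            if (next == -1) {
                // Malformed input: a naive parser that keeps scanning for
                // a delimiter that never arrives is exactly the
                // infinite-loop/OOM trap described in this thread.
                return new ArrayList<>();
            }
            result.add(body.substring(pos + delimiter.length(), next).trim());
            pos = next;
            if (body.startsWith("--", next + delimiter.length())) {
                break; // reached the closing "--boundary--" delimiter
            }
        }
        return result;
    }
}
```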
I've got these changes tested and verified in a custom branch:
https://github.com/heiko-braun/jboss-as/commits/out_of_memory
However, before going ahead, I would like to get some feedback from
a) the original author (Jonathan Pearlin; welcome onboard, btw)
b) Jason wrt the Apache License sources (See
org.jboss.as.domain.http.server.multipart.asf.*)
Regards, Ike
_______________________________________________
jboss-as7-dev mailing list
jboss-as7-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/jboss-as7-dev
--
Jason T. Greene
JBoss, a division of Red Hat