Jason,

    Would you like me to take a look and confirm where the issue is, or are we planning on using Ike's patch? (I would vote for Ike's patch, as the Apache library is most likely much more robust than my cobbled-together solution ;) ).

Thanks,

--Jonathan

On Fri, Apr 15, 2011 at 9:55 AM, Jason T. Greene <jason.greene@redhat.com> wrote:
Yeah, glancing at the code, the problem is likely in the header parsing.


On 4/15/11 7:17 AM, Jonathan Pearlin wrote:
Ike,

    One more thing to add:  when I wrote the smoke tests for the upload
API, I did notice a similar issue.  However, it turned out that I was
not creating the multipart POST correctly (an error in how I was
programmatically building the multipart was causing the parsing to enter
an infinite loop and run out of memory).  However, once I fixed the
test code to produce a valid multipart POST, everything was fine.  In my
normal development testing, I tested by uploading the Hudson WAR file
(which is about 34 MB and much larger than the one you see the issue
with).  However, while it is possible that this issue is due to whatever
is submitting the upload request not following the multipart
specification, the code probably does need to be changed to handle
multipart requests that do not contain completely valid data (if we
cannot use the Apache library).  I would be curious to see what the POST
looks like that is causing the issue (in terms of the multipart
boundary, headers, and payload).  It is fairly easy to modify the server
code to dump the incoming POST request to a file to see if it is at all
different than what is expected.
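For reference, a minimal stdlib-only sketch of the kind of change I mean (the class and method names here are hypothetical, not from the AS7 code base): tee the raw request body into a file for offline inspection, then hand the parser an identical replacement stream so normal processing still runs.

```java
import java.io.*;
import java.nio.file.*;

public class RequestDump {

    /**
     * Reads the entire request body, writes the raw bytes to dumpFile
     * (so the multipart boundary, headers, and payload can be inspected
     * offline), and returns a replacement stream over the same bytes so
     * the usual multipart parsing can still run afterwards.
     */
    public static InputStream tee(InputStream request, Path dumpFile) throws IOException {
        ByteArrayOutputStream copy = new ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        int n;
        while ((n = request.read(buf)) != -1) {
            copy.write(buf, 0, n);
        }
        byte[] body = copy.toByteArray();
        Files.write(dumpFile, body);           // raw POST, byte-for-byte
        return new ByteArrayInputStream(body); // identical stream for the parser
    }
}
```

Note this buffers the whole body in memory, which is fine for a one-off debugging build but obviously not something you would leave in the upload path.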

Thanks,

--Jonathan

On Fri, Apr 15, 2011 at 8:07 AM, Jonathan Pearlin <jpearlin1@gmail.com
<mailto:jpearlin1@gmail.com>> wrote:

   Ike,

        I actually looked at using the Apache library (instead of
   writing my own) when I started, but did not follow that path simply
   because of the potential licensing issues.  However, if Jason agrees
   that it is okay to go down that path, I would love to replace the
   custom multipart parsing code with a more mature library.  That
   being said, the multipart parsing currently relies on a library
   Jason wrote a while back to treat the incoming stream as multipart
   data.  I can certainly try digging into this to see if I can
   pinpoint what is causing the OOM issue (if it is in the HTTP server
   code itself or in the multipart stream class).

   Thanks,

   --Jonathan


   On Fri, Apr 15, 2011 at 7:50 AM, Heiko Braun <hbraun@redhat.com
   <mailto:hbraun@redhat.com>> wrote:




        I've run into an OOM issue when uploading content through the
       HTTP API again.
       (https://issues.jboss.org/browse/JBAS-9268)


       I did take a look at the current implementation and propose that
       we change it in the following way:

       - read multipart contents from mime boundary
        - replace the custom multipart stream implementation with a
        mature one (based on Apache Commons FileUpload)
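To illustrate what "read multipart contents from mime boundary" means, here is a deliberately simplified, stdlib-only sketch (MultipartSplit is a hypothetical name for illustration; the actual branch leans on Commons FileUpload's MultipartStream, which works on the raw byte stream and copes with headers, encodings, and malformed input):

```java
import java.util.*;

public class MultipartSplit {

    /**
     * Splits a multipart body on its "--boundary" delimiter and returns
     * the payload of each part, discarding the preamble, the part
     * headers, and the closing "--boundary--" marker.
     */
    public static List<String> parts(String body, String boundary) {
        List<String> result = new ArrayList<>();
        String delimiter = "--" + boundary;
        for (String chunk : body.split(java.util.regex.Pattern.quote(delimiter))) {
            chunk = chunk.trim();
            if (chunk.isEmpty() || chunk.equals("--")) {
                continue; // preamble or closing marker
            }
            // Part headers end at the first blank line (CRLF CRLF).
            int headerEnd = chunk.indexOf("\r\n\r\n");
            result.add(headerEnd >= 0 ? chunk.substring(headerEnd + 4) : chunk);
        }
        return result;
    }
}
```

A parser like this that scans for the boundary in a buffered string is exactly where an OOM can come from: if the closing boundary never arrives, a naive streaming version keeps buffering, which is the failure mode the FileUpload-based implementation is meant to avoid.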

        I've got these changes tested and verified in a custom branch:
       https://github.com/heiko-braun/jboss-as/commits/out_of_memory

        However, before going ahead, I would like to get some feedback from

        a) the original author (Jonathan Pearlin; welcome aboard, btw)
       b) Jason wrt the Apache License sources (See
       org.jboss.as.domain.http.server.multipart.asf.*)


       Regards, Ike




       _______________________________________________
       jboss-as7-dev mailing list
       jboss-as7-dev@lists.jboss.org <mailto:jboss-as7-dev@lists.jboss.org>


--
Jason T. Greene
JBoss, a division of Red Hat