On 01/12/15 03:17, Brian Stansberry wrote:
On 11/30/15 8:56 PM, Lin Gao wrote:
> There are two aspects to this issue that I think we need to consider:
>
> 1. How to get the size of the file which will be uploaded
>
> In most cases it is possible to know the size of the file to be uploaded:
> in the web console the GWT file object has a size property, and in the CLI
> Java code can get the file size via the File.length() method. But for a CLI
> command like 'deploy --url=http://XXX.zip', where the URL points to a resource
> served with chunked transfer encoding (no Content-Length available), we do not
> know the size at all when the content repository starts to download it.
We also may not know it on the client side (CLI process) regardless of
chunked encoding, as the URL is meant to be opened from the server and
there is no requirement that the client be able to access it. So for
this kind of deployment there is no consistent way for the CLI to check
in advance that the deployment will succeed.
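For reference, here is a minimal sketch of how a client could try to determine
the content size up front using only plain JDK APIs (illustrative only, not the
actual CLI code; the class and method names are made up). It also shows why a
chunked URL defeats the check: getContentLengthLong() simply returns -1 when no
Content-Length is advertised.

    import java.io.File;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class SizeProbe {

        // Local file: the size is always available via File.length().
        static long localSize(String path) {
            return new File(path).length();
        }

        // Remote URL: a HEAD request exposes Content-Length when the server
        // advertises it; with chunked transfer encoding the header is absent
        // and getContentLengthLong() returns -1, i.e. the size is unknown
        // before the download actually starts.
        static long remoteSize(String url) throws Exception {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(url).openConnection();
            conn.setRequestMethod("HEAD");
            try {
                return conn.getContentLengthLong();
            } finally {
                conn.disconnect();
            }
        }
    }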
>
> 2. How to know whether there is enough disk space to process the file
>
> In the web console, Undertow first stores the file in the system temporary
> directory ('/tmp/'), and then the content repository copies it into its root
> directory for processing. In the CLI, the content repository dumps the stream
> into its root directory ('data/content/') for processing. The deployer then
> normally unzips the archive to 'tmp/vfs/temp/', and the size of the exploded
> files can be much bigger than the zip file itself (consider a zip bomb...).
>
> So the disk space needed to process an upload is at least double the size of
> the uploaded file, and usually more. The exact available disk space is hard to
> know, and it keeps changing all the time (think of logs).
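To make the "exploded files can be much bigger" point concrete, here is a rough
sketch (a hypothetical helper, not repository code) that compares an archive's
on-disk size with the uncompressed sizes its entries declare. The declared sizes
come from the archive itself, so a crafted zip bomb can inflate them enormously,
which is exactly why this cannot serve as a reliable bound.

    import java.io.File;
    import java.util.Enumeration;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipFile;

    public class ExplodedSizeEstimate {

        // Sum the uncompressed sizes declared by the archive's entries.
        // ZipEntry.getSize() returns -1 when no size is recorded, so the
        // result is only a lower bound, and a crafted archive can declare
        // sizes vastly larger than the file on disk.
        static long declaredUncompressedSize(File zip) throws Exception {
            long total = 0;
            try (ZipFile zf = new ZipFile(zip)) {
                Enumeration<? extends ZipEntry> entries = zf.entries();
                while (entries.hasMoreElements()) {
                    long size = entries.nextElement().getSize();
                    if (size > 0) {
                        total += size;
                    }
                }
            }
            return total;
        }

        public static void main(String[] args) throws Exception {
            File zip = new File(args[0]);
            System.out.printf("on disk: %d bytes, declared uncompressed: %d bytes%n",
                    zip.length(), declaredUncompressedSize(zip));
        }
    }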
>
> After reconsidering, I agree with Stuart that adding any kind of limit has the
> potential to cause more problems than it solves. :-)
>
> So back to the original question in $subject: shall we limit the size of
> deployments in WildFly? Or do we want to keep pursuing this issue?
>
No to both questions. It's not feasible to create a robust feature.
+1. An arbitrary limit does not solve anything, and you can be sure it will
break something else for someone.
Good analysis; thanks.
>
> Best Regards
> --
> Lin Gao
> Software Engineer
> JBoss by Red Hat
>
> ----- Original Message -----
>> From: "Jason T. Greene" <jason.greene(a)redhat.com>
>> To: "Lin Gao" <lgao(a)redhat.com>
>> Cc: wildfly-dev(a)lists.jboss.org
>> Sent: Wednesday, November 4, 2015 9:41:36 PM
>> Subject: Re: [wildfly-dev] Shall we limit size of the deployment in WildFly?
>>
>>
>>> On Nov 4, 2015, at 6:48 AM, Jason T. Greene <jason.greene(a)redhat.com>
>>> wrote:
>>>
>>> I do not think we should have a hard-coded limit as a constant, because that
>>> would prevent legitimate deployments from working, and you can still run out
>>> of disk space (e.g. you have 20k left and you upload a 21k deployment, which
>>> is well under 1G).
>>>
>>> We could have an extra check that verifies there is enough disk space in the
>>> add-content management op; however, Java does not give us access to quota
>>> information, so the best we could do is verify available disk space (unless
>>> we exec a command for every platform we support).
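For what it's worth, the "only verify available disk space" part could look
roughly like the sketch below (made-up names, not the actual add-content
handler). File.getUsableSpace() reports free space on the partition but knows
nothing about quotas, so the check can pass and the write can still fail.

    import java.io.File;

    public class DiskSpaceCheck {

        // Returns true if the target directory's partition reports at least
        // 'required' bytes free. getUsableSpace() reflects filesystem free
        // space only; OS-level quotas are invisible to the JVM.
        static boolean hasRoomFor(File contentDir, long required) {
            return contentDir.getUsableSpace() >= required;
        }

        public static void main(String[] args) {
            File contentDir = new File("data/content"); // example path from above
            long deploymentSize = 21 * 1024; // e.g. the 21k deployment mentioned earlier
            System.out.println(hasRoomFor(contentDir, deploymentSize));
        }
    }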
>>
>> One thing I forgot to get into was domain mode. Runtime verification is
>> straightforward on standalone, but in domain mode the deployment is pulled
>> by arbitrary hosts, each of which may have varying disk capacity.
>>
>> So in the host case we can only error out later, after the fact. Hosts pull
>> down content from the content repository (where deployments are stored) only
>> when they need it, and this can happen well after a deployment is added. As
>> an example, three weeks later you could add a server to a host in a server
>> group that requires a download of a deployment, and only at that point go
>> over capacity.