uploading files over 2gb with php and apache

i recently ran into a problem at work where i was implementing a php-based file upload system. we wanted end users to be able to upload files larger than 2 gigabytes. (whether allowing multi-gigabyte uploads over http is the best choice is debatable, but that was the requirement.)
the first issue is that many browsers themselves cannot handle uploading files over 2gb. recent versions of safari, chrome, and opera can; so far, firefox and internet explorer cannot. if you're trying to use firefox or ie, it ain't gonna happen even if php can handle it.
unfortunately, even if your browser can handle it, php isn't coded to be able to: there are int/long issues in the php source. (i only messed with 5.2.x, not 5.3.x.) even if you're on a 64-bit system and have php compiled as 64-bit, there are several problem areas.
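for context, the obvious first step is just raising the limits in php.ini. the values below are illustrative, not the exact ones we used:

    ; illustrative php.ini values for large uploads
    post_max_size = 9G           ; total size allowed for the post body
    upload_max_filesize = 9G     ; size allowed for a single uploaded file
    memory_limit = 512M          ; uploads stream to a temp file, not ram
    max_input_time = 7200        ; slow multi-gb uploads need generous timeouts
    max_execution_time = 7200

it's the shorthand values of settings like these that run into the parsing bug described next.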
first, the code that converts post_max_size and friends from ascii shorthand (#K/#M/#G) to a number stores the result in an int. so when it converts "9G" (for example), the value gets borked: 9G is 9,663,676,416 bytes, well past the 2,147,483,647 maximum a signed 32-bit int can hold.
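to make that concrete, here's a stripped-down sketch of that kind of shorthand parser. it's loosely modeled on php's internals, not the actual function:

    #include <stdio.h>
    #include <stdlib.h>

    /* simplified sketch of an ini shorthand parser, loosely modeled on
       php's internals (not the real function). the bug: the real code
       stored the result in an int, and 9G doesn't fit in 32 bits. */
    static long parse_shorthand(const char *str)
    {
        char *end;
        long value = strtol(str, &end, 10);   /* numeric part, e.g. "9" */

        switch (*end) {
            case 'g': case 'G': value *= 1024;   /* fall through */
            case 'm': case 'M': value *= 1024;   /* fall through */
            case 'k': case 'K': value *= 1024;
        }
        return value;
    }

    int main(void)
    {
        long as_long = parse_shorthand("9G");
        int  as_int  = (int)as_long;   /* what storing into an int does */

        printf("as long: %ld bytes\n", as_long);  /* 9663676416 */
        printf("as int:  %d bytes\n", as_int);    /* typically 1073741824, i.e. "9G" silently became 1G */
        return 0;
    }

the fix is keeping the value in a long, and note that long is only 64 bits on a 64-bit build (lp64 linux, etc.), which is why compiling php as 64-bit matters.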
at first i made just that change, because someone had said it's all you need to do. but i don't think they were using apache or php's apache module code. with only that change you no longer see some of the obtuse/weird errors you'd see pre-change, but here's what happens: from the server side you can watch the temp file grow, and when it hits 2gb it just disappears. the browser dutifully continues uploading until it's done, but the file never lands on the server.
so if you're using apache as your web server, there's more to fix: several other areas of the php source used by the apache module, cgi, etc. need their byte counters widened from int to long. a generic demo of that class of bug is below.
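this is an illustration of the general problem, not the actual sapi code: tally up 8kb "reads" past the 2gb mark and compare what int and long make of the total.

    #include <stdio.h>

    int main(void)
    {
        /* generic illustration: accumulate 8kb "reads" past 2gb,
           the way a request body gets read in chunks */
        const long chunk = 8192;
        long total = 0;

        for (long i = 0; i < 300000; i++)   /* 300000 * 8192 = ~2.4gb */
            total += chunk;

        printf("as long: %ld\n", total);       /* 2457600000 */
        printf("as int:  %d\n", (int)total);   /* typically -1837367296: a negative byte count */
        return 0;
    }

a negative byte count is exactly the kind of value that makes request-handling code bail and throw away the temp file.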
so…for this to work, you need to edit the php code and compile it by hand (make sure you compile it as 64-bit). here’s a link to a list of diffs:
http://www.archive.org/~tracey/downloads/patches/karmic-64bit-post-large-files.patch
(referenced from a php bug report)
the file above is a diff against 5.2.10 source, but i made the changes by hand to 5.2.17, and i have since uploaded single 3.4gb files through apache/php (which hadn't worked before the change).
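for reference, the rebuild looks roughly like this. these commands are illustrative, not a transcript; the paths, configure flags, and patch -p level will vary with your setup:

    cd php-5.2.17
    patch -p0 < karmic-64bit-post-large-files.patch    # expect fuzz/manual fix-ups on 5.2.17
    CFLAGS="-m64" ./configure --with-apxs2=/usr/sbin/apxs
    make
    sudo make install
    file sapi/cli/php    # sanity check: output should say 64-bit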

2 comments on “uploading files over 2gb with php and apache”

  1. So – was doing it in php/web browser a good idea? And is it working for you?
    I have a current request to make an “easy, secure file uploader”. Previously we used SFTP, but that was too hard for the customers; some people like FTP better, but others don’t want to use FTP for security reasons.
    I’m thinking a simple perl script might be the best answer, and that is the direction I’ve been heading so far.

  2. the easier-for-the-end-user bit (versus using sftp, for example) was the main reason we wanted to stick with a web-based solution. i had coded a simple site in cold fusion years ago that still worked, but cf and/or the code had some size/timeout limits.
    we’ve had the new php-based code in place since march or so, and it’s been working fine. i see people upload 1 to 1.5gb of data in one upload, but almost never above that size. but like i said, with the changes i made to the php codebase and apache module bits i was able to upload 3, 4, and more gb of data in one session.
    the “not a good idea” reference was to the fact that uploading over http isn’t the fastest way to move data, but as you alluded to, it’s easy for the average end user to understand. the solution is definitely working for us. (although there are rumors of some people wanting to do similar things with terabytes of data…don’t think http upload will be the answer for that. 🙂)
