PHP - Reliable PHP Downloads...
Hi all,
I've tried many, many variations of PHP download scripts using readfile(), fread(), fpassthru(), etc., but none seem reliable. I can download small files perfectly fine, but with, say, a 30 MB or 90 MB zip file the download occasionally bombs out and reports itself as complete even though the full file hasn't been transferred. I have tried it in several browsers with pretty much the same issue. The interesting bit for me is that if I download the same file in different browsers (whether or not they start at the same time) on my computer, the download stops at the same time (not at the same point in the download) in each browser. It's as if the connection is being reset on my website on a global basis...

The important part of my script as it is at the moment:

Code: [Select]
// resumable download?
$is_resume = TRUE;

// Gather relevant info about file
$size = filesize($path);
$fileinfo = pathinfo($path);

@ini_set('magic_quotes_runtime', 0);
set_time_limit(0);
apache_setenv('no-gzip', '1');
mb_http_output("pass");

// required for IE, otherwise Content-Disposition is ignored
if (ini_get('zlib.output_compression')) {
    ini_set('zlib.output_compression', 'Off');
}

// workaround for IE filename bug with multiple periods / multiple dots in filename
// that adds square brackets to filename - eg. setup.abc.exe becomes setup[1].abc.exe
$filename = (strstr($_SERVER['HTTP_USER_AGENT'], 'MSIE'))
    ? preg_replace('/\./', '%2e', $fileinfo['basename'], substr_count($fileinfo['basename'], '.') - 1)
    : $fileinfo['basename'];

$file_extension = strtolower($fileinfo['extension']);

// This will set the Content-Type to the appropriate setting for the file
switch ($file_extension) {
    case 'zip': $ctype = 'application/zip'; break;
    case 'pdf': $ctype = 'application/pdf'; break;
    default:    $ctype = 'application/force-download';
}

// check if HTTP_RANGE is sent by browser (or download manager)
if ($is_resume && isset($_SERVER['HTTP_RANGE'])) {
    list($size_unit, $range_orig) = explode('=', $_SERVER['HTTP_RANGE'], 2);
    if ($size_unit == 'bytes') {
        // multiple ranges could be specified at the same time,
        // but for simplicity only serve the first range
        // http://tools.ietf.org/id/draft-ietf-http-range-retrieval-00.txt
        list($range, $extra_ranges) = explode(',', $range_orig, 2);
    } else {
        $range = '';
    }
} else {
    $range = '';
}

// figure out download piece from range (if set)
list($seek_start, $seek_end) = explode('-', $range, 2);

// set start and end based on range (if set), else set defaults
// also check for invalid ranges
$seek_end = (empty($seek_end)) ? ($size - 1) : min(abs(intval($seek_end)), ($size - 1));
$seek_start = (empty($seek_start) || $seek_end < abs(intval($seek_start))) ? 0 : max(abs(intval($seek_start)), 0);

// add headers if resumable
if ($is_resume) {
    // Only send partial content header if downloading a piece of the file (IE workaround)
    if ($seek_start > 0 || $seek_end < ($size - 1)) {
        header('HTTP/1.1 206 Partial Content');
    }
    header('Accept-Ranges: bytes');
    header('Content-Range: bytes ' . $seek_start . '-' . $seek_end . '/' . $size);
}

header("Cache-Control: cache, must-revalidate");
header("Pragma: public");
header('Content-Type: ' . $ctype);
header("Content-Disposition: attachment; filename=\"" . $filename . "\"");
header('Content-Length: ' . ($seek_end - $seek_start + 1));

// open the file
$fp = fopen($path, 'rb');
// seek to start of missing part
fseek($fp, $seek_start);

// start buffered download
while (!feof($fp)) {
    // reset time limit for big files
    set_time_limit(0);
    print(fread($fp, 1024 * 8));
    flush();
    ob_flush();
}
fclose($fp);
exit();

I did have the following section in place of the above fread section up until this morning (but same issue):

Code: [Select]
// HTTP headers for zip downloads
header("Pragma: public");
header("Expires: 0");
header("Cache-Control: must-revalidate, post-check=0, pre-check=0");
header("Cache-Control: public");
header("Content-Description: File Transfer");
header("Content-type: application/octet-stream");
header("Content-Disposition: attachment; filename=\"" . $filename . "\"");
header("Content-Transfer-Encoding: binary");
header('Content-Length: ' . $size);
ob_end_flush();
@readfile($path);

I have tried different headers and so forth, but all generally seem to bomb out occasionally before the download completes. They don't bomb out at a particular point in the file (i.e. at 25 MB); they just seem to work and then die at the same time across different browsers, even when they are started at different times. Very strange, and I've spent days modifying the script with still no answers. Sometimes the files download fine, but they bomb out too many times for it to be satisfactory to leave as is. Any help or pointers would be much appreciated.
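One way to narrow a problem like this down (not from the thread, just a sketch) is to log how every download attempt actually ends: how many bytes PHP wrote, and whether the connection finished normally, was aborted by the client, or timed out. A minimal sketch using a shutdown function, assuming the existing fread loop is amended to count bytes into $GLOBALS['bytes_sent'] and that /tmp/download.log is writable by the web server:

Code: [Select]
// Sketch only - diagnostic logging, not part of the original script.
// Register this before the fread loop; inside the loop, capture each chunk
// ($chunk = fread($fp, 1024*8); print($chunk);) and add strlen($chunk)
// to $GLOBALS['bytes_sent'].
$GLOBALS['bytes_sent'] = 0;

register_shutdown_function(function () use ($path, $seek_start, $seek_end) {
    // connection_status(): 0 = normal, 1 = aborted by client, 2 = timed out
    $line = sprintf(
        "%s %s sent=%d expected=%d status=%d\n",
        date('c'),
        basename($path),
        $GLOBALS['bytes_sent'],
        $seek_end - $seek_start + 1,
        connection_status()
    );
    error_log($line, 3, '/tmp/download.log');   // append to a plain log file
});

If the log consistently shows status 1 at the same wall-clock time across browsers, that would suggest something between PHP and the client (the web server, a proxy, a connection limit) is dropping the connection, rather than the script itself failing.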
Similar Tutorials

Which is more reliable (functionality-wise, not compatibility-wise) for retrieving the user's/visitor's time: JavaScript (the new Date() function, which detects the visitor's time automatically) or PHP's date and time extension (DateTime, where the user/visitor selects their timezone themselves)?

As mentioned in the subject, how reliable is the following if I want to display time according to the local timezone? putenv('TZ=MYTIMEZONE'); where MYTIMEZONE is one of the identifiers from http://www.php.net/manual/en/timezones.asia.php
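For what it's worth, a common alternative to the TZ environment variable is PHP's own timezone setting, which does not depend on the underlying OS honouring TZ. A minimal sketch - the timezone identifier is just an example from the manual's list:

Code: [Select]
// Sketch: set the timezone with PHP's built-in setting instead of putenv('TZ=...').
// 'Asia/Singapore' is only an example identifier.
date_default_timezone_set('Asia/Singapore');
echo date('Y-m-d H:i:s');   // formatted in the selected timezone

// or per object, without touching the global default:
$dt = new DateTime('now', new DateTimeZone('Asia/Singapore'));
echo $dt->format('Y-m-d H:i:s');

Either way, the visitor's timezone still has to come from the visitor (a select box, a value set from JavaScript, etc.); server-side PHP cannot reliably detect it on its own.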
I launched my new website about a month ago. I switched from one web host to another due to poor hosting performance. Now I'm running into the same issue again -- poor web hosting performance.

My first web host was Hostgator. My current web host is AT&T. I hate the thought of switching to a different web host every month trying to find one that will reliably host my site. Does anyone here have a reliable web host that they use and would recommend?
My question is relative: what is reliable for a simple website may not be reliable for one that is more complex. For this reason, I can't simply trust web host reviews.
My website isn't overly complicated, but it's more complex than just basic HTML. It uses a lot of PHP, as well as a MySQL database that has only two small tables. The website uploads and downloads small text files regularly. It also sends email attachments quite often.
Because I just launched, my website isn't getting a ton of traffic -- about 10 users per day. However, I'm beginning to run into the same problem as before. My web host's server is starting to show itself as being unreliable. As with my first web host, it seems as if it may be due to overcrowding on the shared server.
Do any of you run any moderately complex websites? If so, who do you use for a reliable web host?
I've considered setting up my own server with a LAMP configuration and hosting the site myself. However, I don't know a lot about Linux or Apache, and so would like to avoid this. But because the computer would only be hosting my own website, and no one else's, I have to believe that a LAMP setup would be more reliable than a shared server that is overcrowded.
A reliable web host is really what I'm looking for, but I don't want to keep going down the road of trial and error. If anyone uses a web host that reliably supports their moderately complex website, I would love to hear from you. I'm sick of my site failing due to server issues. Like the Duracell commercial says, "It just has to work!"
Please forgive me if you feel that my post doesn't correctly fit the forum category. I tried to figure out which category best fits this topic, but none of them seemed to be perfectly suitable.
Thank you for your time, as well as for any suggestions.
Hi! I am currently trying to develop a PHP application in which my server downloads a file and the user can do the same almost simultaneously. I've already thought about the problem of the user downloading faster than the server, but that's not an issue at the moment. To do this, I used PHP's header and readfile functions. Here is my code:

Code: [Select]
header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="'.$data['name'].'";');
header('Content-Transfer-Encoding: binary');
header('Content-Length: '.$data['size']);
readfile($remoteFile);

I have to use the Content-Length header to set the proper size of the file, not the size that has been downloaded when the user clicks the link. However, after some seconds or minutes the download stops and I need to restart... If you can think of a solution, even one that doesn't use the header() function, please tell me. Thank you in advance...

Not sure if I'm putting this in the correct spot, but can anyone give me a link or an idea of how to fix drive-by downloads I have on a site I created? Everything I find online covers how to repair an issue someone may have gotten on their computer from a drive-by download, but nothing talks about repairing it on the actual site itself. I don't have any injections or anything, but checking my site on the Norton site shows that I have one threat, and it's on a contact form page due to drive-by downloads. Is this from the PHP tags on the form not being stripped, or possibly from the JavaScript? Some old lady submitted my site to http://safeweb.norton.com/report/show?url=gotmine.com, so now I have to fix it, I guess. Any help would be great, thanks.

I'm creating a CSV file on the fly for download in the backend - a stock management function. The script extracts from the database fine and creates the CSV, but the next step of the script is to download the file using:

Code: [Select]
header('Content-type: application/octet-stream');
header('Content-Disposition: attachment; filename="' . $filename . '"');

This works, in as much as it downloads the file specified by $filename, but when I open the file it's empty and the file size in its properties is 0. If I download the same file through my FTP client instead, it contains all the data as expected. Anyone have any ideas what's wrong? Thoughts I've had: is the file size too large - would using ob_start('ob_gzhandler'); help, or is there an INI setting? Does the file encoding need to be set (UTF-8)? Should the Content-Type be text/csv? Thanks

Is there a way (in PHP) to prompt to download a file regardless of its type? I have taken someone's suggestion and put files in a folder with .htaccess protection. All I want to do is grab the files and prompt a download (regardless of the file type) - is there an easy way to do this in PHP?

This upload form code works successfully, where a file is chosen, renamed and stored in the 'upload' folder...

So, I have a script that works downloading a file - excerpt below:

Code: [Select]
header("Pragma: public");
header("Expires: 0");
header('Content-type: "application/octet-stream"');
header("Cache-Control: private", false);
header("Cache-Control: must-revalidate, post-check=0, pre-check=0");
$f = "Content-Disposition: attachment; filename=\"".$myfile['downloadname'].".".$myfile['ext']."\"";
header($f);
header("Content-Transfer-Encoding: binary");
header("Content-Length: ".filesize($file));
ob_clean();
flush();
readfile("$file");
exit();

I record elsewhere in a MySQL database when the script is started. In addition, I'd like to be able to record whether the download completes (successfully or otherwise). As it is, the script can run, the user can press cancel, and it looks like a download but isn't. Is this possible? If so, how? Your help, as ever, gratefully received.
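A common way to approach that last question (not from the thread, just a sketch of the general technique) is to stream the file in chunks rather than with readfile(), count the bytes written, and check connection_aborted() afterwards. The table and column names below are purely illustrative, as is the assumption of a PDO handle in $pdo and a row id in $downloadId created when the download started:

Code: [Select]
// Sketch only: one way to record whether a download finished.
// Assumes $path, a PDO handle in $pdo, and a hypothetical downloads table
// with (id, bytes_sent, completed) columns - all illustrative, not from the thread.
ignore_user_abort(true);          // keep PHP running even after the client disconnects
$size = filesize($path);
$sent = 0;

$fp = fopen($path, 'rb');
while (!feof($fp) && !connection_aborted()) {
    $chunk = fread($fp, 8192);
    if ($chunk === false) break;
    echo $chunk;
    flush();
    $sent += strlen($chunk);
}
fclose($fp);

// Record the outcome: completed only if every byte went out and no abort was detected.
// $downloadId is assumed to identify the row written when the download started.
$completed = (!connection_aborted() && $sent >= $size) ? 1 : 0;
$stmt = $pdo->prepare('UPDATE downloads SET bytes_sent = ?, completed = ? WHERE id = ?');
$stmt->execute([$sent, $completed, $downloadId]);

Strictly speaking this only proves the bytes left PHP; the browser can still discard a finished response, so "completed" here really means "fully sent without an abort being detected".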
I would like to be able to add a yes/no value to a field in a 'user' database when a logged-in user clicks a link to download a file. The site I have built stores docs and applications, and my client would like to know which user has downloaded a certain application (clicked a button to download an installer file). Is this possible?

Hi, firstly, apologies if you feel this is in the wrong place in the forum. I have a website, and I use PHP in various places to pull/push info from/to a MySQL database; I'm fairly happy with PHP and its uses. I've recently started to use the Google Checkout widget on the site for a proposed web store (to go live at a later date), which has the ability to 'sell' download URLs or keys. I like the idea of a customer being able to purchase multiple MP3 downloads and then be given a URL with the links for the selected MP3s. This is probably beyond anything I've ever done with PHP, though, so I need to understand the architecture. I understand that, given a list of items from the store, I can run a query to display the stored URLs for those items, and I'm sure I can find code that will give me a random URL that 'expires' after an amount of time. My question is: how do I get the list of items ordered from Google Checkout? I'm sure this has something to do with APIs, but I don't know what they are; presumably there is some kind of table at Google that I can tap into to get the product info? Any help would be very much appreciated. Oh, and I'm sure people are going to ask "why are you using Google Checkout?" - the answer being that, for me, it seems simple, effective and free! (But I'm open to suggestions.) Darren

Hi, I have downloads that are available for users. They can click the links to download the file, and once they do this, I have a query that updates the number of downloads that file has received. I need to create a query that adds up all the rows' Total Downloads. So, for example, if I have 3 rows:

Name - Total Downloads
First - 2
Second - 23
Third - 7

Using the query on these rows would give me a total of 32. Any idea how to do this? I don't want to have to create a new column to hold the total number of downloads. Any other ways?
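The usual answer to that last question is SQL's SUM() aggregate, which totals a column across rows without needing an extra column. A minimal sketch - the table and column names (downloads, total_downloads) and the connection details are placeholders for whatever the real schema uses:

Code: [Select]
// Sketch: total up a per-row download counter with SUM().
// Table/column names and credentials are illustrative; adjust to the real schema.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
$total = $pdo->query('SELECT SUM(total_downloads) AS total FROM downloads')
             ->fetchColumn();
echo "Total downloads: " . (int)$total;   // e.g. 2 + 23 + 7 = 32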