PHP - Curl Timeout - How Long For A Large File?
Hi
I'm currently writing a script that downloads videos from a specific page. I'm downloading with cURL, but some of the files are so large that cURL times out. This causes one of the following:

a) PHP times out
b) PHP runs out of memory
c) cURL stops once the defined timeout limit is reached

This means that some files are only partially downloaded, since some files are over 100 MB and some are only 20 MB. I have

```php
set_time_limit(0);
```

and

```php
ini_set("memory_limit", "500M");
```

set, but is there a way to make it so that neither PHP nor the cURL session times out until the file has finished downloading?
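A sketch of one common approach, assuming the video URL is already known: stream the response straight to disk with CURLOPT_FILE so memory use stays flat, and set CURLOPT_TIMEOUT to 0 so only a genuinely stalled connection aborts the transfer. The URL and save path below are placeholders.

```php
<?php
set_time_limit(0); // let PHP run as long as the transfer needs

$url  = 'http://example.com/videos/large.mp4'; // placeholder URL
$dest = '/tmp/large.mp4';                      // placeholder save path

$fp = fopen($dest, 'w');
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_FILE, $fp);             // write directly to disk, not RAM
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_TIMEOUT, 0);            // 0 = no overall time limit
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 30);    // still fail fast if the server is down
curl_setopt($ch, CURLOPT_LOW_SPEED_LIMIT, 1024); // abort only if truly stalled...
curl_setopt($ch, CURLOPT_LOW_SPEED_TIME, 60);    // ...below 1 KB/s for 60 seconds

if (curl_exec($ch) === false) {
    echo 'Download failed: ' . curl_error($ch);
}
curl_close($ch);
fclose($fp);
```

Because the body goes straight to the file handle, the 500M memory_limit stops mattering even for multi-gigabyte files.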
Similar Tutorials

Hello All, I have a simple upload form which I am using to upload files to Box.net using PHP cURL. It works fine for small files, but times out for larger files. Does anyone have any suggestions for this? Thanks, Pete. Here is the code:

```php
<html>
<body bgcolor="black">
<div align="center">
<img src="Homepage_02.jpg" border="0" />
<br><br>
<font color="#f1ca63" face="Arial" size="5">Upload</font>
<br><br>
<?php
if (isset($_POST['upload'])) {
    if (!empty($_FILES['new_file_1']['name'])) {
        $allowedExtensions = array("txt","csv","xml","css","doc","docx","xls","xlsx",
            "rtf","ppt","pdf","swf","flv","avi","wmv","mov","jpg","jpeg","gif","png");
        foreach ($_FILES as $file) {
            if ($file['tmp_name'] > '') {
                if (!in_array(end(explode(".", strtolower($file['name']))), $allowedExtensions)) {
                    echo $file['name'].' is an invalid file type!<br/>';
                } else {
                    $temp_name = $_FILES['new_file_1']['name'];
                    $localfile = $_FILES['new_file_1']['tmp_name'];
                    $file = fopen($localfile, 'r');
                    $request_url = 'https://upload.box.net/api/1.0/upload/[Token Here]/[Folder ID]';
                    $post_params['check_name_conflict_folder_option'] = urlencode('1');
                    $post_params['new_file_1'] = "@$localfile";
                    $post_params['description'] = urlencode($_POST['description']);
                    $post_params['uploader_email'] = urlencode($_POST['uploader_email']);
                    $post_params['upload'] = urlencode('upload');

                    $ch = curl_init();
                    curl_setopt($ch, CURLOPT_URL, $request_url);
                    curl_setopt($ch, CURLOPT_VERBOSE, 1);
                    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
                    curl_setopt($ch, CURLOPT_POST, true);
                    curl_setopt($ch, CURLOPT_CUSTOMREQUEST, "POST");
                    curl_setopt($ch, CURLOPT_POSTFIELDS, $post_params);
                    curl_setopt($ch, CURLOPT_TIMEOUT, 300);
                    curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
                    $result = curl_exec($ch);
                    curl_close($ch);

                    $resultArray = explode(' ', $result);
                    if ($resultArray[5] != '') {
                        $fileID = substr($resultArray[5], 4, -1);
                        $shareName = $temp_name;
                        $link = 'http://www.box.net/shared/'.$shareName;
                    }
                    $renameurl = addslashes("https://www.box.net/api/1.0/rest?action=rename&api_key=[API KEY]&auth_token=[TOKEN Here]&target=file&target_id=".$fileID."&new_name=".$shareName);
                    $renameResult = file_get_contents($renameurl);
                    echo '<font color="white">Upload Successful</font>';
                }
            }
        }
    } else {
        echo '<font color="white">Please select a file</font>';
    }
}
?>
<hr width=600 color=grey>
<br>
<div align="center">
<form action="box_upload_curl.php" enctype="multipart/form-data" method="post">
  <input type="hidden" name="check_name_conflict_folder_option" value="1"/>
  <table>
    <tr>
      <td class="field" style="color:#f1ca63; font-family:Arial; font-size:14px" width="50%">Choose File to Upload: </td>
      <td class="input"><input type="file" name="new_file_1" /></td>
    </tr>
    <tr>
      <td class="field field_top" style="color:#f1ca63; font-family:Arial; font-size:14px"><br/>Description (optional):</td>
      <td class="input"><br/><textarea name="description"></textarea></td>
    </tr>
    <tr>
      <td class="field field_top" style="color:#f1ca63; font-family:Arial; font-size:14px"><br/>Your e-mail <font color="red">*</font>:</td>
      <td class="input field_top" style="color:#f1ca63; font-family:Arial; font-size:14px"><br/><input type="text" name="uploader_email" id="email_input"></input></td>
    </tr>
    <tr>
      <td colspan="2" class="submit" align="center"><br /><input type="submit" name="upload" value="Upload" /></td>
    </tr>
  </table>
</form>
<hr width=600 color="grey">
</div>
```
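One hedged suggestion for the post above: the hard-coded CURLOPT_TIMEOUT of 300 seconds caps the whole transfer at five minutes, which a large file on a slow uplink can easily exceed. A sketch of the relevant changes (CURLFile requires PHP 5.5+; on older versions the "@" syntax stands):

```php
set_time_limit(0); // stop PHP's own execution limit from killing the request

curl_setopt($ch, CURLOPT_TIMEOUT, 0);         // 0 = no overall transfer cap
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 30); // still bound the connect phase

// On PHP 5.5+, CURLFile is the supported replacement for the "@" upload syntax:
$post_params['new_file_1'] = new CURLFile($localfile, 'application/octet-stream', $temp_name);
curl_setopt($ch, CURLOPT_POSTFIELDS, $post_params);
```

The receiving side matters too: upload_max_filesize, post_max_size and max_execution_time in php.ini all have to allow the large file before the cURL call ever sees it.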
I'd asked some questions here before, and the answers confirmed a few things, but it all comes back to the initial confusion.
The problem seems to start when I hit "refresh" (when the code changes I usually hit refresh) or when there's no activity (meaning nothing checks whether the user is logged in or does anything on site B).
When it happens, I'm always prompted to log in again, which means the check is failing. In the second case it may be caused by a timeout setting, but I have no idea why it happens from simply refreshing.
Does anyone have any idea what to check in such a situation?
Thanks in advance,
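If the session really is being dropped between requests, two php.ini-level settings are worth ruling out first. A minimal hedged sketch, assuming the login check lives in a shared include; the session key name is made up:

```php
<?php
// A session cookie with lifetime 0 dies when the browser closes, and a short
// gc_maxlifetime can delete the server-side session data between visits.
ini_set('session.gc_maxlifetime', 3600); // keep server-side session data for 1 hour
session_set_cookie_params(3600);         // keep the client-side cookie for 1 hour too
session_start();

if (empty($_SESSION['user_id'])) {       // hypothetical login flag
    header('Location: login.php');
    exit;
}
```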
Hi guys, I am currently receiving a large text file (> 500 MB) once per week, which I have been manually splitting and then processing to obtain the required CSV files. However, this takes in the region of 2 to 3 hours. Very soon these files will be sent daily, and I really don't have the time to split and process them every day. I have been playing for a while trying to parse everything properly/automatically with fopen, feof and fgets (and the other 'f' functions), but the script never seems to read the file all the way to the end; I assume this is due to memory usage. The data in the file follows a strict pattern throughout, which is:

```
BSNY990141112271112270100000 POO2C35 122354000 DMUS 075 O
BX NTY
LOLANCSTR 1132 11322 TB
LIMORCMSJ 1135 00000000
LICRNFNJN 1140 00000000 H
LICRNF 1141H1142H 11421142 T
LISDAL 1147H1148H 11481148 T
LIARNSIDE 1152H1153 11531153 T
LIGOVS 1158 1159 11581159 T
LIKTBK 1202 1202H 12021202 T
LICARK 1206 1207 12061207 T
LIULVRSTN 1214H1215H 12151215 T
LIDALTON 1223 1223H 12231223 T
LIDALTONJ 1225 00000000
LIROOSE 1229 1229H 12291229 T 2
LTBAROW 1237 12391 TF
```

That is just one record of information (1 of around 140,000 records). Each record has no fixed number of lines, but each line in each record is fixed at 80 characters, and all lines in a record need to share the same unique ID; at present I'm using an MD5 hash of microtime. The first line of every record starts with 'BS' and the last line of each record starts with 'LT', terminating with 'TF'. All the other stuff in between also follows a pattern that I can break down effectively. The record above shows one train service schedule, hence why each line in a record needs the same unique ID. Has anyone got any ideas on how I could process such a file effectively? Many thanks, Dave
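A sketch of a streaming approach, assuming the BS/LT markers described above: fgets reads one 80-character line at a time, so memory stays constant regardless of file size, and each record gets its ID when its 'BS' line appears. The file name and the per-line handler are placeholders.

```php
<?php
$fh = fopen('schedule.txt', 'r');            // placeholder input file
if ($fh === false) {
    die('cannot open file');
}

$recordId = null;
while (($line = fgets($fh)) !== false) {
    $line = rtrim($line, "\r\n");
    $type = substr($line, 0, 2);

    if ($type === 'BS') {
        // New record: mint one ID and reuse it for every line until LT.
        $recordId = md5(uniqid('', true));
    }
    if ($recordId !== null) {
        save_line($recordId, $type, $line);  // hypothetical per-line handler
    }
    if ($type === 'LT') {
        $recordId = null;                    // record finished
    }
}
fclose($fh);
```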
<?php header("Cache-Control: no-cache, must-revalidate"); // HTTP/1.1 header("Expires: Sat, 26 Jul 1997 05:00:00 GMT"); // Date in the past session_set_cookie_params(0); session_start(); ?> <html> <head> <meta http-equiv='cache-control' content='no-cache'> <meta http-equiv='expires' content='0'> <meta http-equiv='pragma' content='no-cache'> </head> <body> <p>Processing...</p> <script> myFunction(); function myFunction() { setInterval(function(){ location.reload(); <?php clearstatcache(); if(file_exists($_SESSION['filename'])) { $existance = true; } else { $existance = false; } ?> if("<?php echo $existance ?>") { window.open("done.php","_self"); } else { } }, 5000); } </script> </body> </html> As you can see, I use the "clearstatcache();" statement in order to make sure that the server's information is current before checking the existence of the file. I have also used meta tags, in the HTML, and php headers in order to make sure that nothing is being cached. This is all in an attempt to make sure that it's not looking at old data when it checks to see if the file exists. Despite all of this, the browser keeps reloading, over and over again, usually about ten times, even after the file exists. Any ideas why this is happening? Edited by Maq, 10 July 2014 - 03:52 PM. When I try to upload a file larger than the server's max limit, the following code is not executed. How am I supposed to inform the user that their file is too large? NOTE: I've stripped the code down for this post. Code: [Select] <?php if(isset($_POST['submit'])) { echo "test.."; } ?> <html> <head> <title>Upload Test</title> </head> <body> <form action='' enctype='multipart/form-data' method='POST'> <input type='file' name='file_upload' /> <input type='submit' name='submit' value='upload' /> </form> </body> </html> Hello, I am working on a project that downloads large zip files from server, for small files the script works well and downlaod files successfully, but for larger files like currently we are trying to download a 922MB file it gives us this message (in firefox) and doesn't download any thing. " File not found Firefox can't find the file at http://www.domainname.com/abc.zip " Script to download the file is as below: " $filename = "xyz.mp3; header("Pragma: public"); header("Expires: 0"); header("Cache-Control: must-revalidate, post-check=0, pre-check=0"); header("Content-Type: application/force-download"); header("Content-Type: application/octet-stream"); header("Content-Type: application/download"); header("Content-Disposition: attachment; filename=".basename($filename).";"); header("Content-Transfer-Encoding: binary"); header("Content-Length: ".filesize($filename)); if( !ini_get('safe_mode') ) set_time_limit(360000000); readfile("$filename"); " Please advise what can be issue, if its file size issue then how and where can we increase the limit to solve this issue. pre-thanks, Hello. My script is set to upload files upto 5GB large. For that script I've currently set memory_limit to 5GB. Is it alright? I mean what is the ideal value (for large upload scripts) If you feel, 5GB is large. I can make script to upload 2GB files and set memory_limit accordingly. Also, max_execution_time has been set by me to 86400 currently. Assuming, on a 500Kbps broadband, it would require upto 24 hours to upload a 3-5GB file. Please suggest. Thank you. Going to try and explain this the best I can but I don't really have the best idea on what's happening here. 
Hello. My script is set to upload files up to 5 GB large, and for that script I've currently set memory_limit to 5 GB. Is that alright? I mean, what is the ideal value for large upload scripts? If you feel 5 GB is too large, I can make the script upload 2 GB files and set memory_limit accordingly. Also, max_execution_time is currently set to 86400, on the assumption that on a 500 Kbps broadband connection it could take up to 24 hours to upload a 3-5 GB file. Please suggest. Thank you.

Going to try and explain this the best I can, but I don't really have the best idea of what's happening here. I have a submission form for users to fill out their information and upload an image. I've set the file size limit at 500000, which I assumed would be safe for images of 400 KB or below. When testing locally, any image below that file size gets uploaded successfully. However, when testing on my online host/server, the form data is submitted successfully but the image isn't saved at all. It obviously isn't over the size limit I set, because no error is returned; the form submits successfully but my image is neither saved nor resized. I really have no clue what the problem could be. I went over the variables I set for the folder locations to move the image to, and everything works fine locally, but once on the host and online, it doesn't happen.
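A hedged debugging sketch for the hosted case just above: dump the per-file error code and confirm the destination directory actually exists and is writable on the host, since a missing or read-only folder fails in exactly this silent way. The field and directory names are placeholders.

```php
<?php
$uploadDir = __DIR__ . '/uploads/';          // placeholder target folder

// UPLOAD_ERR_INI_SIZE etc. pinpoint limit problems; 0 (UPLOAD_ERR_OK) means
// PHP received the file and the failure is after the upload itself.
var_dump($_FILES['image']['error']);         // hypothetical field name

if (!is_dir($uploadDir) || !is_writable($uploadDir)) {
    die('Upload folder missing or not writable on this server');
}
if (!move_uploaded_file($_FILES['image']['tmp_name'],
                        $uploadDir . basename($_FILES['image']['name']))) {
    die('move_uploaded_file failed - check the path and permissions');
}
echo 'Saved OK';
```

It's also worth comparing upload_max_filesize and post_max_size between the two environments; hosts often default lower than a local install.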
Hi, I'm learning PHP and trying to write a script to extract registration information from a large text file. Sadly my meagre knowledge of PHP is letting me down a bit. It's a case of knowing what you want the script to do but not having the knowledge of how to 'say it'. So I was hoping that if I posted my code here, someone could either give me a few pointers on where I am going wrong or suggest a better way. The text file data luckily has a recurring format, as follows (for brevity I've only included one entry, which contains made-up information):

```
From: bella_done@yahoo.co.uk
Sent: 02 February 2011 22:50
To: Jonny tum, patsy fells, dingly bongo
Subject: Subject: Fun Run 2010
Categories: Fun Run
Name: Bella Donna
Address: 14 brondle avenue
Postcode: cd83 1rg
Phone: 0287343510
Email: bella_don@yahoo.co.uk
DOB: 15/11/1945
Half or Full: Full fun run
How did you hear: Took part in 2010
```

As you can see, the data has a convenient boundary at the 'From' field and the colon (or so it occurred to me), so I created my script as follows:

```php
// the string being analysed
$the_string = "
From: bella_done@yahoo.co.uk
Sent: 02 February 2011 22:50
To: Jonny tum, patsy fells, dingly bongo
Subject: Subject: Fun Run 2010
Categories: Fun Run
Name: Bella Donna
Address: 14 brondle avenue
Postcode: cd83 1rg
Phone: 0287343510
Email: bella_don@yahoo.co.uk
DOB: 15/11/1945
Half or Full: Full fun run
How did you hear: Took part in 2010";

// remove all formatting to work with a clean string
$clean_string = strip_tags($the_string);

// remove form field labels from the data and replace with commas and a ZZZ boundary
$remove_fields = array(
    "Categories:" => "", "Name:" => ",", "Address:" => ",", "Postcode:" => ",",
    "Phone:" => ",", "Email:" => ",", "DOB:" => ",", "Half or Full:" => ",",
    "How did you hear:" => ",", "From:" => "ZZZ", "Sent:" => ",", "To:" => ",",
);
$new_string = strtr($clean_string, $remove_fields);

// split the data at the boundary ZZZ
$string_to_array = explode("ZZZ", $new_string);
$new_string2 = implode("</br>", $string_to_array);
echo $new_string2;

$myFile = "address_list.csv";
$fh = fopen($myFile, 'w') or die("can't open file");
$stringData = $new_string2;
fwrite($fh, $stringData);
fclose($fh);
```

One major problem: when I write the new data to a CSV file, the CSV contains spacing that causes it to be reproduced in column form rather than as separate fields at each comma boundary. So can anyone suggest either a) a better way of extracting the data from the text file (it doesn't need to be 100% clean and perfect), or b) how I can stop the spaces in the CSV (I thought I had fixed this when I stripped the tags from the string at the start)? Any help would be greatly received by a newbie PHPer. It's my first shot at performing anything moderately taxing, so if I've made some blaring oversights I would very much welcome your wisdom! Thank you, Drongo
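A hedged alternative sketch: since every field sits on its own "Label: value" line, a line-oriented loop plus fputcsv (which handles quoting and spacing for you) may be simpler than the strtr/explode approach. The field labels are taken from the sample record; the input filename is a placeholder.

```php
<?php
$fields = array('Name', 'Address', 'Postcode', 'Phone', 'Email', 'DOB',
                'Half or Full', 'How did you hear');

function write_record($out, array $fields, array $record)
{
    $row = array();
    foreach ($fields as $f) {
        $row[] = isset($record[$f]) ? $record[$f] : '';
    }
    fputcsv($out, $row); // fputcsv adds quoting/escaping, so no stray spacing
}

$in  = fopen('registrations.txt', 'r');   // placeholder input file
$out = fopen('address_list.csv', 'w');
fputcsv($out, $fields);                   // header row

$record = array();
while (($line = fgets($in)) !== false) {
    $line = trim($line);
    if (strpos($line, 'From:') === 0 && !empty($record)) {
        write_record($out, $fields, $record); // "From:" starts a new entry
        $record = array();
    }
    if (preg_match('/^([^:]+):\s*(.*)$/', $line, $m)) {
        $record[trim($m[1])] = $m[2];
    }
}
if (!empty($record)) {
    write_record($out, $fields, $record);
}
fclose($in);
fclose($out);
```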
I'm trying to use a PHP script to parse a large XML file (around 450 MB) into a MySQL database, following a certain structure and the definitions of the included XML elements. The problem is that the original script uses file_get_contents and SimpleXMLElement to get it done, and the cron job executed by the server halts due to the volume of the XML file. I'm no PHP expert, so I bought XMLSplit software and divided the XML into 17 separate files of about 30 MB each, then parsed them one by one using the same script. However, the output database was missing a lot of input, and I have serious doubts whether this produces the same output as parsing the original, undivided file.
So I've decided to use XMLReader with this exact PHP script to parse the big XML file, but so far I haven't managed to simply replace the parsing code while keeping the other functionality intact.
I'm including the script below; I'd really appreciate it if someone could help me do that.
```php
<?php
set_time_limit(0);
ini_set('memory_limit', '1024M');
include_once('../db.php');
include_once(DOC_ROOT.'/include/func.php');

mysql_query("TRUNCATE screenshots_list");
mysql_query("TRUNCATE pages");
mysql_query("TRUNCATE page_screenshots");

// This is the part I need help with: switch it to XMLReader instead of the
// current approach, so the large XML file parses correctly (while keeping the
// rest of the script as-is if possible):
$xmlstr = file_get_contents('t_info.xml');
$xml = new SimpleXMLElement($xmlstr);

foreach ($xml->template as $item) {
    $sql = sprintf("REPLACE INTO templates SET id = %d, state = %d, price = %d, exc_price = %d, inserted_date = '%s', update_date = '%s', downloads = %d, type_id = %d, type_name = '%s', is_flash = %d, is_adult = %d, width = '%s', author_id = %d, author_nick = '%s', package_id = %d, is_full_site = %d, is_real_size = %d, keywords = '%s', sources = '%s', description = '%s', software_required = '%s'",
        $item->id, $item->state, $item->price, $item->exc_price, $item->inserted_date,
        $item->update_date, $item->downloads, $item->template_type->type_id,
        $item->template_type->type_name, $item->is_flash, $item->is_adult, $item->width,
        $item->author->author_id, $item->author->author_nick, $item->package->package_id,
        $item->is_full_site, $item->is_real_size, $item->keywords, $item->sources,
        $item->description, $item->software_required);
    mysql_query($sql);

    foreach ($item->screenshots_list->screenshot as $scr) {
        $main  = (!empty($scr->main_preview)) ? 1 : 0;
        $small = (!empty($scr->small_preview)) ? 1 : 0;
        insert_data($item->id, 'screenshots_list', 0, $scr->uri, $scr->filemtime, $main, $small);
    }
    foreach ($item->styles->style as $st) {
        insert_data($item->id, 'styles', $st->style_id, $st->style_name);
    }
    foreach ($item->categories->category as $cat) {
        insert_data($item->id, 'categories', $cat->category_id, $cat->category_name);
    }
    foreach ($item->sources_available_list->source as $so) {
        insert_data($item->id, 'sources_available_list', $so->source_id, '');
    }
    foreach ($item->software_required_list->software as $soft) {
        insert_data($item->id, 'software_required_list', $soft->software_id, '');
    }
    if (!empty($item->pages->page)) {
        foreach ($item->pages->page as $p) {
            mysql_query(sprintf("REPLACE INTO pages SET tpl_id = %d, name = '%s', id = NULL", $item->id, $p->name));
            $page_id = mysql_insert_id();
            if (!empty($p->screenshots->scr)) {
                foreach ($p->screenshots->scr as $psc) {
                    $href = (!empty($psc->href)) ? (string)$psc->href : '';
                    mysql_query(sprintf("REPLACE INTO page_screenshots SET page_id = %d, description = '%s', uri = '%s', scr_type_id = %d, width = %d, height = %d, href = '%s'",
                        $page_id, $psc->description, $psc->uri, $psc->scr_type_id,
                        $psc->width, $psc->height, $href));
                }
            }
        }
    }
}
?>
```

I'd appreciate your help with that...
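A hedged sketch of how the XMLReader swap could look, assuming the file's repeated element is `<template>` as the foreach above implies; the per-item body of the existing loop would drop in unchanged:

```php
<?php
$reader = new XMLReader();
$reader->open('t_info.xml');

while ($reader->read()) {
    if ($reader->nodeType == XMLReader::ELEMENT && $reader->localName == 'template') {
        // Expand only this <template> into a DOM node, then hand it to SimpleXML,
        // so memory scales with one template instead of the whole 450 MB file.
        $dom  = new DOMDocument();
        $node = $dom->importNode($reader->expand(), true);
        $dom->appendChild($node);
        $item = simplexml_import_dom($node);

        // ... the existing body of `foreach ($xml->template as $item)` goes here
        //     unchanged: the REPLACE INTO templates query, the insert_data()
        //     calls, and the pages/page_screenshots handling.
    }
}
$reader->close();
```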
Hi guys, I did read a lot of documentation on the internet about reading/writing/parsing XML files. I ended up using the following code, because I really have large files (some about 200 MB) and regular DOM does not work:

```php
while ($xml->read()) {
    switch ($xml->nodeType) {
        case (XMLReader::ELEMENT):
            if ($xml->localName == "job") {
                $node = $xml->expand();
                $dom = new DomDocument();
                $n = $dom->importNode($node, true);
                $dom->appendChild($n);
                $job = simplexml_import_dom($n);
```

The problem I have is a special-character error in the XML file; the error is returned on this line: `$node = $xml->expand();`. I am literally banging my head against the wall to find a simple solution to this. I already have a cleaning function, but it can only be applied after the code above. As the file is large, cleaning it would mean using the same code above to work on partial content at a time, so I would hit the same special-character problem when reading and splitting the file. I bet I am not the first one to be in this situation, but after about 5 hours of searching the internet I cannot take it any more, and I am not enough of a PHP expert to come up with a new idea. One other option would probably be to split the file into multiple files and read those without using XMLReader, but that would require a different application. If, for example, I read a file that produces the error with simplexml instead of XMLReader, I don't get the error. But I cannot use simplexml on these files, since the file size varies; I have to use a reliable method that works in all situations. Hopefully someone has an idea for this STUPID situation! Thanks.

Is this possible? I have a PHP file that serves a file (which is below the web root). That works fine, and the download dialog is presented when it's accessed directly. I'm building an API, however, and that file needs to be served via an HTTP request. I'm trying to get it working with cURL, but right now it just outputs the raw byte data to the screen. Has anyone had success, or does anyone know what I need to try, to be able to call that PHP file with cURL and have the browser try to download the file? (I've tried a lot of combinations of settings for this.)

```php
$ch = curl_init();
curl_setopt($ch, CURLOPT_HEADER, 1);
curl_setopt($ch, CURLOPT_URL, $remote);
curl_setopt($ch, CURLOPT_BINARYTRANSFER, 1);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 0);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, $nvp);
return curl_exec($ch);
// side note: I'm hoping the cURL settings don't matter too too much,
// because a C# program will need to request the download.
```
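A hedged sketch of one way to proxy that download: capture the bytes with RETURNTRANSFER, then re-issue the attachment headers from your own script, since headers in the remote response never reach the browser on their own. The endpoint, POST fields, and filename are placeholders.

```php
<?php
$remote = 'https://example.com/serve_file.php'; // placeholder endpoint
$nvp    = 'key=value';                          // placeholder POST fields

$ch = curl_init($remote);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // capture the bytes instead of printing them
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $nvp);
$data = curl_exec($ch);
curl_close($ch);

// Hand the bytes to the browser as a download from *this* script.
header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="report.zip"'); // placeholder name
header('Content-Length: ' . strlen($data));
echo $data;
```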
Hi All, I have been having issues for several days uploading a simple PDF file via cURL to one of our service providers' servers. I'm doing an is_uploaded_file check and getting a successful response, but the document is not actually being uploaded to the server. Can anyone help? Please see the code below:

```php
$filename = $_FILES['activity_doc']['name']; // Name of the file (including file extension).
$upload_path = 'https://upload_url';         // redacted upload URL

// Curl session initialized
$session = curl_init();
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_USERAGENT, "Mozilla/4.0 (compatible;)");
curl_setopt($session, CURLOPT_URL, $upload_path);
curl_setopt($session, CURLOPT_USERPWD, 'username:password');
curl_setopt($session, CURLOPT_POST, 1);
$post = array(
    "file_box" => "@" . $filename,
);
curl_setopt($session, CURLOPT_POSTFIELDS, $post);
$response = curl_exec($session);

if (is_uploaded_file($_FILES['activity_doc']['tmp_name'])) {
    echo "File " . $_FILES['activity_doc']['name'] . " uploaded successfully.\n";
} else {
    echo "Error occurred, file upload unsuccessful";
}

$req = new SimpleXMLElement($response);
print_r($req);
curl_close($session);
```

Hi all, I am trying to download a .zip file that is in a protected folder on an external server. If I put the URL to the .zip directly in the address bar and hit Go, I am presented with a basic HTTP authentication popup for a username and password (which I obviously have). Could someone point me in the right direction on this one? I have tested and can download unprotected .zip files with no problem; I guess I just need to figure out how to authenticate and then save the file in one process. Thanks for any and all help, Matt
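A hedged sketch for the basic-auth download just above: CURLOPT_USERPWD supplies the same credentials the browser popup asks for, and CURLOPT_FILE streams the zip straight to disk. The URL and credentials are placeholders.

```php
<?php
$url  = 'https://example.com/protected/archive.zip'; // placeholder URL
$dest = './archive.zip';

$fp = fopen($dest, 'w');
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_HTTPAUTH, CURLAUTH_BASIC);     // same scheme as the popup
curl_setopt($ch, CURLOPT_USERPWD, 'username:password'); // placeholder credentials
curl_setopt($ch, CURLOPT_FILE, $fp);                    // stream straight to disk
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);

if (curl_exec($ch) === false) {
    echo 'Download failed: ' . curl_error($ch);
}
curl_close($ch);
fclose($fp);
```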
Hello,
I have managed to find, retrieve and save a file using cURL, but I am having to hard-code the file extension. Is there a way to find the file extension automatically? (It seems the file extension isn't within the download URL.)
(Also, is there a way of getting the file name so I can save it under the same name? That would be great.)
Thanks for your help,
Stu
P.S. I've tried the pathinfo($url) function, but that gets its information from the download URL rather than from the downloaded file.
$url="http://webmail.WEBSITE.com/src/redirect.php"; $cookie="cookie.txt"; $postdata = "login_username=USERNAME&secretkey=PASSWORD&js_autodetect_results=0&just_logged_in=1"; # get the cookie $ch = curl_init(); curl_setopt ($ch, CURLOPT_URL, $url); curl_setopt ($ch, CURLOPT_SSL_VERIFYPEER, FALSE); curl_setopt ($ch, CURLOPT_USERAGENT, "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.6) Gecko/20070725 Firefox/2.0.0.6"); curl_setopt ($ch, CURLOPT_TIMEOUT, 60); curl_setopt ($ch, CURLOPT_FOLLOWLOCATION, 0); curl_setopt ($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt ($ch, CURLOPT_COOKIEJAR, $cookie); curl_setopt ($ch, CURLOPT_REFERER, $url); curl_setopt ($ch, CURLOPT_POSTFIELDS, $postdata); curl_setopt ($ch, CURLOPT_POST, 1); $result = curl_exec ($ch); curl_close($ch); $ch = curl_init(); curl_setopt($ch, CURLOPT_COOKIEFILE, $cookie); //read cookies from here curl_setopt($ch, CURLOPT_URL, "http://webmail.WEBSITE.com/src/right_main.php"); curl_setopt($ch, CURLOPT_HEADER, 0); $result = curl_exec($ch); curl_close($ch); # download file $source = "http://webmail.WEBSITE.com/src/download.php?mailbox=INBOX&passed_id=6475&startMessage=1&override_type0=text&override_type1=html&ent_id=2&absolute_dl=true"; $ch = curl_init(); curl_setopt($ch, CURLOPT_COOKIEFILE, $cookie); //read cookies from here curl_setopt($ch, CURLOPT_URL, $source); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt($ch, CURLOPT_SSLVERSION,3); $data = curl_exec ($ch); $error = curl_error($ch); curl_close ($ch); # !!! The line below needs to be automated !!! $destination = "./files/test.html"; $file = fopen($destination, "w+"); fputs($file, $data); fclose($file); Edited by stubarny, 17 June 2014 - 04:46 PM. Hi.
Hi. I'm working on a program that adds new posts to Facebook profiles, fan pages and groups. Everything works well, but when I try to implement image uploading it just doesn't work (a white page as the result, and the file probably isn't even sent, given the very short sending time).
The multipart form sniffed with Firefox looks like this:
```
Content-Type: multipart/form-data; boundary=---------------------------13742240526862
Content-Length: 788662

-----------------------------13742240526862
Content-Disposition: form-data; name="fb_dtsg"

AQGlrKAR9le4
-----------------------------13742240526862
Content-Disposition: form-data; name="charset_test"

€,´,€,´,水,Д,Д
-----------------------------13742240526862
Content-Disposition: form-data; name="file1"; filename="h.jpg"
Content-Type: image/jpeg

FILE_CONTENTS_HERE
-----------------------------13742240526862
Content-Disposition: form-data; name="file2"; filename=""
Content-Type: application/octet-stream

-----------------------------13742240526862
Content-Disposition: form-data; name="file3"; filename=""
Content-Type: application/octet-stream

-----------------------------13742240526862
Content-Disposition: form-data; name="caption"

-----------------------------13742240526862
Content-Disposition: form-data; name="return_uri"

/groups/880573151971764?view=group&fc=photo_upload_success
-----------------------------13742240526862
Content-Disposition: form-data; name="return_uri_error"

https://m.facebook.com/photos/upload/?target_id=880573151971764&upload_source=composer&cwevent=intent_media&ctype=inline&referrer=group&session_id=5f737e58-0e23-4270-a8af-8bfa1100f3c5&refid=18
-----------------------------13742240526862
Content-Disposition: form-data; name="target"

880573151971764
-----------------------------13742240526862
Content-Disposition: form-data; name="ref"

m_upload_pic
-----------------------------13742240526862
Content-Disposition: form-data; name="album_fbid"

-----------------------------13742240526862
Content-Disposition: form-data; name="csid"

5f737e58-0e23-4270-a8af-8bfa1100f3c5
-----------------------------13742240526862
Content-Disposition: form-data; name="ctype"

advanced
-----------------------------13742240526862
Content-Disposition: form-data; name="referrer"

group
-----------------------------13742240526862
Content-Disposition: form-data; name="is_old_composer"

1
-----------------------------13742240526862--
```
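A hedged sketch of how that form could be replayed with cURL: when CURLOPT_POSTFIELDS receives an array containing a CURLFile, cURL builds the multipart/form-data body and boundary itself. The field values are the sniffed ones above; the endpoint is assumed from the return_uri_error field, and the fb_dtsg token, csid and cookies change per session, so treat them all as placeholders.

```php
<?php
$post = array(
    'fb_dtsg'         => 'AQGlrKAR9le4',                 // session token from the page
    'charset_test'    => '€,´,€,´,水,Д,Д',
    'file1'           => new CURLFile('h.jpg', 'image/jpeg', 'h.jpg'),
    'caption'         => '',
    'return_uri'      => '/groups/880573151971764?view=group&fc=photo_upload_success',
    'target'          => '880573151971764',
    'ref'             => 'm_upload_pic',
    'album_fbid'      => '',
    'csid'            => '5f737e58-0e23-4270-a8af-8bfa1100f3c5',
    'ctype'           => 'advanced',
    'referrer'        => 'group',
    'is_old_composer' => '1',
);

$ch = curl_init('https://m.facebook.com/photos/upload/'); // assumed endpoint
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $post);        // array + CURLFile => multipart body
curl_setopt($ch, CURLOPT_COOKIEFILE, 'cookie.txt'); // logged-in session cookies
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$result = curl_exec($ch);
curl_close($ch);
```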
Hello, I'm using cURL to grab a new solar image about once an hour from the Solar Dynamics Observatory (example below). I'm trying to archive new images and am struggling with that. If I download an image, the filemtime() function returns the current time, since I've just downloaded it and written it to a fresh file. The result is that the file is always "new", even if the image hasn't changed on the SDO website. Do you have an idea of how to check the last-modified time of a file through cURL or other means, so that I'm not downloading duplicate images? Thanks a ton!

```php
// fetch image
$ch = curl_init("http://www.somewebsite.com/theimage.jpg");
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
$file = "../images/latest/theimage.jpg";
$fp = fopen($file, "w");
curl_setopt($ch, CURLOPT_FILE, $fp);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_exec($ch);
curl_close($ch);
fclose($fp);

// this part is no good...
// get last modified date
if (file_exists($file)) {
    $filetime = filemtime($file);
}
```

Hi, this is the code I made to show the problem:

```php
$useragent = "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.12) Gecko/20101026 Firefox/3.6.12 ( .NET CLR 3.5.30729)";
$timeout = 10;
$cookie = tempnam("/tmp", "CURLCOOKIE");
$post = array(
    '_method' => "put",
    'authenticity_token' => 'zcvcxfsdfvxcv',
    'profile_image[a]' => "@Girl-Next-Door-movie-f01.jpg"
);

$ch = curl_init();
curl_setopt($ch, CURLOPT_USERAGENT, $useragent);
curl_setopt($ch, CURLOPT_HEADER, 1);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Expect:'));
curl_setopt($ch, CURLOPT_URL, "http://localhost/test.php");
curl_setopt($ch, CURLOPT_COOKIEFILE, $cookie); // read cookies from here
curl_setopt($ch, CURLOPT_COOKIEJAR, $cookie);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_ENCODING, "");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_AUTOREFERER, true);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false); # required for https urls
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
curl_setopt($ch, CURLOPT_TIMEOUT, $timeout);
curl_setopt($ch, CURLOPT_POSTFIELDS, $post);
$html = curl_exec($ch);
curl_close($ch);
```

The URL used above, http://localhost/test.php, contains this code:

```php
print_r($_POST);
print_r($_FILES);
```

It simply prints what's in $_POST and $_FILES. So the code above displays this on screen:

```
Array ( [_method] => put [authenticity_token] => zcvcxfsdfvxcv )
Array ( [profile_image] => Array (
    [name] => Array ( [a] => Girl-Next-Door-movie-f01.jpg )
    [type] => Array ( [a] => image/jpeg )
    [tmp_name] => Array ( [a] => /tmp/phppLJPQV )
    [error] => Array ( [a] => 0 )
    [size] => Array ( [a] => 55377 )
) )
```

but we need to modify the code so that it displays this:

```
Array ( [_method] => put [authenticity_token] => zcvcxfsdfvxcv )
Array ( [profile_image[a]] => Array (
    [name] => Girl-Next-Door-movie-f01.jpg
    [type] => image/jpeg
    [tmp_name] => /tmp/phppLJPQV
    [error] => 0
    [size] => 55377
) )
```

Meaning, it is treating profile_image[a] as an array when we define $post, because that's how cURL recognises that we want to upload a file (or an array of files) to the server by HTTP POST. So basically the problem is the name of the input field, which is defined as an array (profile_image[a]) on the web page we are trying to mock. If profile_image[a] were profile_image_a (without the brackets) on that page, there would be no problem. So, if you understand, this is basically a syntax problem: I don't know how to stop cURL from reading 'profile_image[a]' as an array in 'profile_image[a]' => "@Girl-Next-Door-movie-f01.jpg". I need cURL to read 'profile_image[a]' as a string and not an array. I have to use the brackets; otherwise I will not be able to mock the web page, as the name would change and it would give an error. I hope I've explained the problem and also given you a way to test whether you have a solution: if your code starts displaying the second output, then we have a solution. Thanks for helping in advance. Regards, Manoj
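Going back to the solar-image post above: a hedged sketch using cURL's own filetime support. Ask the server for the remote Last-Modified date with CURLOPT_FILETIME, compare it to the timestamp saved from the previous fetch, and skip the download when nothing has changed. The URL and state file are placeholders.

```php
<?php
$url   = 'http://www.somewebsite.com/theimage.jpg'; // placeholder URL
$state = '../images/latest/last_fetch.txt';         // stores the previous remote mtime

// HEAD request first: headers only, no image body.
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_NOBODY, true);
curl_setopt($ch, CURLOPT_FILETIME, true);           // ask curl to parse Last-Modified
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch);
$remoteTime = curl_getinfo($ch, CURLINFO_FILETIME); // -1 if the server didn't say
curl_close($ch);

$lastTime = file_exists($state) ? (int)file_get_contents($state) : 0;

if ($remoteTime > 0 && $remoteTime <= $lastTime) {
    echo "Image unchanged, skipping download\n";
} else {
    // ... run the existing download code here, then remember the new timestamp:
    file_put_contents($state, (string)$remoteTime);
}
```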
Hey guys, I'm a total newbie here, and just about as new to PHP.
My issue: I have a very large .html file that contains multiple articles (I actually have a few of these, but we'll start with one for practicality). The article titles are all wrapped in <h2> tags; there are 10 articles in one file. The articles are very simple: just a title wrapped in <h2> and then a few paragraphs wrapped in <p> tags. What I want to know how to do: I want to know if there's a way to open that file and have each article saved as its own .html or .txt document (the title and following paragraphs of each article), ultimately taking my one large file and creating the subsequent 10 smaller files from the articles inside it. I am having trouble explaining this in text, so I'll try to illustrate: I have "Articles.html", which contains (article1, article2, article3 ... article10). I want to split "Articles.html" and create "Article1.html", "Article2.html", "Article3.html", etc. Is that possible? Or am I looking at something far more complex than I can imagine at this point, perhaps something I'd be better off doing by hand? Ultimately I intend to put all these articles into a database, but that's the second part of what I want to do (and I think it will be the easier of the tasks). Let me know if you need any additional information in the event my description above is unclear... I simply am having issues figuring out how to separate the text into individual articles.
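A hedged sketch of one way to do the split, assuming every article starts at an <h2> tag as described; the input filename is a placeholder:

```php
<?php
$html = file_get_contents('Articles.html'); // placeholder input file

// Split in front of each <h2>, keeping the tag with the chunk that follows it.
$chunks = preg_split('/(?=<h2)/i', $html, -1, PREG_SPLIT_NO_EMPTY);

$n = 0;
foreach ($chunks as $chunk) {
    // Skip any leading markup before the first <h2> (head, body tags, etc.).
    if (stripos($chunk, '<h2') !== 0) {
        continue;
    }
    $n++;
    file_put_contents("Article{$n}.html", trim($chunk));
}
echo "$n articles written\n";
```

The same loop could insert each chunk into a database row instead of writing a file, which would cover the second part of the task as well.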