PHP - Large File To Csv
Hi guys,
I am currently receiving a large text file ( > 500mb), once per week which I have been manually splitting then processing to obtain the required CSV files. However, this is taking in the region of 2 to 3 hours. Very soon, these files will be sent daily and I really dont have the time to split and process this everyday I have been playing for a while to try and parse everything properly/automatically with fopen, feof and fgets ( and other 'f' options), but the script never seems to read the file all the way to the end - I assume this is due to memory usage. The data received in the file follows a strict pattern throughout the file which is: Code: [Select] BSNY990141112271112270100000 POO2C35 122354000 DMUS 075 O BX NTY LOLANCSTR 1132 11322 TB LIMORCMSJ 1135 00000000 LICRNFNJN 1140 00000000 H LICRNF 1141H1142H 11421142 T LISDAL 1147H1148H 11481148 T LIARNSIDE 1152H1153 11531153 T LIGOVS 1158 1159 11581159 T LIKTBK 1202 1202H 12021202 T LICARK 1206 1207 12061207 T LIULVRSTN 1214H1215H 12151215 T LIDALTON 1223 1223H 12231223 T LIDALTONJ 1225 00000000 LIROOSE 1229 1229H 12291229 T 2 LTBAROW 1237 12391 TF That is just one record of informaton (1 of around 140,000 records), each record has no fixed amount of lines but each line in each record is fixed to 80 characters and all lines in each record need to have the same unique 'id', at present, Im using an md5 hash of microtime. The first line of every record starts with 'BS' and the last line of each record starts with 'LT' terminating with 'TF'. All the other stuff between also follows a certain pattern of which I can break down effectively. The record above show one train service schedule, hence why each line in each record needs the same unique id. Anyone got any ideas on how I could process such a file effectively?? Many thanks Dave Similar TutorialsHello, Im trying to find a way to check around 500-600 links to check if they are alive. It works fine for 5-6 links but once i add more links it just times out. Is there a way i could process this so it does 1 link at a time or somthing ? <?php include("config.php"); $query = "SELECT * FROM `games` WHERE `r_fileserve` <> \"\" LIMIT 500"; $result = mysql_query($query); while($row=mysql_fetch_assoc($result)) { $link_str = file_get_contents("$row[r_fileserve]"); $pattern = '<input type="hidden" name="download" value="normal"/>'; preg_match($pattern,$link_str,$match); if ($match[0] != null) { echo "Working <br />"; } else { echo "File Down <br />"; } } ?> Hello, I am working on a project that downloads large zip files from server, for small files the script works well and downlaod files successfully, but for larger files like currently we are trying to download a 922MB file it gives us this message (in firefox) and doesn't download any thing. " File not found Firefox can't find the file at http://www.domainname.com/abc.zip " Script to download the file is as below: " $filename = "xyz.mp3; header("Pragma: public"); header("Expires: 0"); header("Cache-Control: must-revalidate, post-check=0, pre-check=0"); header("Content-Type: application/force-download"); header("Content-Type: application/octet-stream"); header("Content-Type: application/download"); header("Content-Disposition: attachment; filename=".basename($filename).";"); header("Content-Transfer-Encoding: binary"); header("Content-Length: ".filesize($filename)); if( !ini_get('safe_mode') ) set_time_limit(360000000); readfile("$filename"); " Please advise what can be issue, if its file size issue then how and where can we increase the limit to solve this issue. pre-thanks, When I try to upload a file larger than the server's max limit, the following code is not executed. How am I supposed to inform the user that their file is too large? NOTE: I've stripped the code down for this post. Code: [Select] <?php if(isset($_POST['submit'])) { echo "test.."; } ?> <html> <head> <title>Upload Test</title> </head> <body> <form action='' enctype='multipart/form-data' method='POST'> <input type='file' name='file_upload' /> <input type='submit' name='submit' value='upload' /> </form> </body> </html> Hello. My script is set to upload files upto 5GB large. For that script I've currently set memory_limit to 5GB. Is it alright? I mean what is the ideal value (for large upload scripts) If you feel, 5GB is large. I can make script to upload 2GB files and set memory_limit accordingly. Also, max_execution_time has been set by me to 86400 currently. Assuming, on a 500Kbps broadband, it would require upto 24 hours to upload a 3-5GB file. Please suggest. Thank you. Hi I'm currently writing a script that basically downloads videos from a specific page. I am downloading with cURL however with some files, they're so large cURL is timing out. This is causing either a) PHP to timeout b) PHP memory to run out c) cURL to stop once defined timeout limit is reached This means that some files are only partitially downloaded as some files are over 100mb and some are only 20mb I have Code: [Select] set_time_limit(0);and Code: [Select] ini_set("memory_limit","500M");set but is there a way to make it so PHP will not timeout and the cURL session will not timeout until the file is downloaded? Going to try and explain this the best I can but I don't really have the best idea on what's happening here. I have a submission form for users to fill out their information and upload an image. I've set the file limit size at 500000 which I assumed would be safe for images at 400k or below. When testing locally, any image that is below that file size gets uploaded successfully. However, when testing on my online host/server.. the submission form and data is successfully entered but the image isn't saved at all. It obviously isn't over the size of the file limit I set because it dooesn't return an error.. it successfuly submits but doesn't save or resize my image. I really have no clue what the problem could be. I went over the variables I set for folder locations to move the image to and everything works fine locally, but once on the host and online, it doesn't happen. Hi I'm learning php and trying to write a script to extract registration information from a large text file. Sadly my meagre knowledge of php is letting me down a bit. It's a case of knowing what you want the script to do but not having the knowlege of how to 'say it'. So i was hoping that if I posted my code here someone could either give me a few pointers on where i am going wrong or suggest a better way. The text file data luckily has a recurring format as follows (for brevity i've only included one entry, which contains made up information): From: bella_done@yahoo.co.uk Sent: 02 February 2011 22:50 To: Jonny tum, patsy fells, dingly bongo Subject: Subject: Fun Run 2010 Categories: Fun Run Name: Bella Donna Address: 14 brondle avenue Postcode: cd83 1rg Phone: 0287343510 Email: bella_don@yahoo.co.uk DOB: 15/11/1945 Half or Full: Full fun run How did you hear: Took part in 2010 As you can see the data has a convenient boundary at the 'from' field and the colon (or so it occurred to me) so I created my script as follows: // the string being analysed $the_string = " From: bella_done@yahoo.co.uk Sent: 02 February 2011 22:50 To: Jonny tum, patsy fells, dingly bongo Subject: Subject: Fun Run 2010 Categories: Fun Run Name: Bella Donna Address: 14 brondle avenue Postcode: cd83 1rg Phone: 0287343510 Email: bella_don@yahoo.co.uk DOB: 15/11/1945 Half or Full: Full fun run How did you hear: Took part in 2010"; // remove all formatting to work with a clean string $clean_string = strip_tags($the_string); // remove form field entries from the data and replace with commas and a ZZZ boundary $remove_fields = array("Categories:" => "","Name:" => ",","Address:" => ",","Postcode:" => ",","Phone:" => ",","Email:" => ",","DOB:" => ",","Half or Full:" => ",","How did you hear:" => ",","From:" => "ZZZ","Sent:" => ",","To:" => ",", ); $new_string = strtr("$clean_string",$remove_fields); // split the data at the boundary ZZZ $string_to_array = explode("ZZZ", $new_string); $new_string2 = implode("</br>",$string_to_array); echo $new_string2; $myFile = "address_list.csv"; $fh = fopen($myFile, 'w') or die("can't open file"); $stringData = $new_string2; fwrite($fh, $stringData); fclose($fh); One major problem is when i write the new data to a csv file the csv contains spacings that cause it to be reproduced in a column form rather than as separate fields for each comma boundary. So can anyone suggest either a) a better way of extracting the data from the text file (doesn't need to be 100% clean and perfect) b) How can i stop the spaces in the csv (i thought i would have fixed this when i stripped the tags from the string at the start??). Any help would be greatly received by a newbie phper. It's my first shot at performing anything moderately taxing so if I've made some blaring oversites I would very much welcome your wisdom! Thank you Drongo I'm trying to utilize a PHP script to parse a large XML file (around 450 MB) to MYSQL database into certain structure and definitions of included XML elements. The problem is that the original script uses file_get_contents and SimpleXMLElement to get it done, but the corn job executed by the server halts due to the volume of the XML file. I'm no PHP expert, so I bought this XMLSplit software and divided the XML into 17 separated XML files each at size of 30 MB, parsed them one by one using the same script. However, the output database was missing a lot of input, and I have serious doubts whether this would be the same output of the original file if left not divided automatically and parsed one by one.
So, I've decided to use XMLReader with this exact PHP script to parse this big XML file, but so far I couldn't manage to simply replace the parsing code and keep other functionality intact.
I'm including the script below, I'd really appreciate if someone helps me to do so.
<?php set_time_limit(0); ini_set('memory_limit', '1024M'); include_once('../db.php'); include_once(DOC_ROOT.'/include/func.php'); mysql_query("TRUNCATE screenshots_list"); mysql_query("TRUNCATE pages"); mysql_query("TRUNCATE page_screenshots"); // This is the part I need help with to change into XMLReader instead of utilized function, to enable parsing of the large XML file correctly (while keeping rest of the script code as is if possible): $xmlstr = file_get_contents('t_info.xml'); $xml = new SimpleXMLElement($xmlstr); foreach ($xml->template as $item) { //print_r($item); $sql = sprintf("REPLACE INTO templates SET id = %d, state = %d, price = %d, exc_price = %d, inserted_date = '%s', update_date = '%s', downloads = %d, type_id = %d, type_name = '%s', is_flash = %d, is_adult = %d, width = '%s', author_id = %d, author_nick = '%s', package_id = %d, is_full_site = %d, is_real_size = %d, keywords = '%s', sources = '%s', description = '%s', software_required = '%s'", $item->id, $item->state, $item->price, $item->exc_price, $item->inserted_date, $item->update_date, $item->downloads, $item->template_type->type_id, $item->template_type->type_name, $item->is_flash, $item->is_adult, $item->width, $item->author->author_id, $item->author->author_nick, $item->package->package_id, $item->is_full_site, $item->is_real_size, $item->keywords, $item->sources, $item->description, $item->software_required); //echo '<br>'.$sql; mysql_query($sql); //print_r($item->screenshots_list->screenshot); foreach ($item->screenshots_list->screenshot as $scr) { $main = (!empty($scr->main_preview)) ? 1 : 0; $small = (!empty($scr->small_preview)) ? 1 : 0; insert_data($item->id, 'screenshots_list', 0, $scr->uri, $scr->filemtime, $main, $small); } foreach ($item->styles->style as $st) { insert_data($item->id, 'styles', $st->style_id, $st->style_name); } foreach ($item->categories->category as $cat) { insert_data($item->id, 'categories', $cat->category_id, $cat->category_name); } foreach ($item->sources_available_list->source as $so) { insert_data($item->id, 'sources_available_list', $so->source_id, ''); } foreach ($item->software_required_list->software as $soft) { insert_data($item->id, 'software_required_list', $soft->software_id, ''); } //print_r($item->pages->page); if (!empty($item->pages->page)) { foreach ($item->pages->page as $p) { mysql_query(sprintf("REPLACE INTO pages SET tpl_id = %d, name = '%s', id = NULL ", $item->id, $p->name)); $page_id = mysql_insert_id(); if (!empty($p->screenshots->scr)) { foreach ($p->screenshots->scr as $psc) { $href = (!empty($psc->href)) ? (string)$psc->href : ''; mysql_query(sprintf("REPLACE INTO page_screenshots SET page_id = %d, description = '%s', uri = '%s', scr_type_id = %d, width = %d, height = %d, href = '%s'", $page_id, $psc->description, $psc->uri, $psc->scr_type_id, $psc->width, $psc->height, $href)); } } } }}?>I'd appreciate your help with that... Hello All, I have a simple upload form which I am using to upload files to Box.net using PHP Curl. It works fine for small files, but times out for larger files. Anyone have any suggestions for this? Thanks, Pete Here is the code: Code: [Select] <html> <body bgcolor="black"> <div align="center"> <img src="Homepage_02.jpg" border="0" /> <br> <br> <font color="#f1ca63"; font face="Arial"; font size="5">Upload</font> <br> <br> <?php if (isset($_POST['upload'])) { if (!empty($_FILES['new_file_1']['name'])) { $allowedExtensions = array("txt","csv","xml","css","doc", "docx","xls","xlsx","rtf","ppt","pdf","swf","flv","avi","wmv","mov","jpg","jpeg","gif","png"); foreach ($_FILES as $file) { if ($file['tmp_name'] > '') { if (!in_array(end(explode(".", strtolower($file['name']))),$allowedExtensions)) { echo $file['name'].' is an invalid file type!<br/>'; } else { $temp_name = $_FILES['new_file_1']['name']; $localfile = $_FILES['new_file_1']['tmp_name']; $file = fopen($localfile,'r'); $request_url = 'https://upload.box.net/api/1.0/upload/[Token Here]/[Folder ID]'; $post_params['check_name_conflict_folder_option'] = urlencode('1'); $post_params['new_file_1'] = "@$localfile"; $post_params['description'] = urlencode($_POST['description']); $post_params['uploader_email'] = urlencode($_POST['uploader_email']); $post_params['upload'] = urlencode('upload'); $ch = curl_init(); curl_setopt($ch, CURLOPT_URL, $request_url); curl_setopt($ch, CURLOPT_VERBOSE, 1); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt($ch, CURLOPT_POST, true); curl_setopt($ch, CURLOPT_CUSTOMREQUEST, "POST"); curl_setopt($ch, CURLOPT_POSTFIELDS, $post_params); curl_setopt($ch, CURLOPT_TIMEOUT, 300); curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false); $result = curl_exec($ch); curl_close($ch); $resultArray = explode(' ',$result); if($resultArray[5] != '') { $fileID = substr($resultArray[5],4,-1); $shareName = $temp_name; $link = 'http://www.box.net/shared/'.$shareName; } $renameurl = addslashes("https://www.box.net/api/1.0/rest?action=rename&api_key=[API KEY]&auth_token=[TOKEN Here]&target=file&target_id=".$fileID."&new_name=".$shareName); $renameResult = file_get_contents($renameurl); echo '<font color="white">Upload Successful</font>'; } } } } else { echo '<font color="white">Please select a file</font>'; } } ?> <hr width=600 color=grey> <br> <div align="center"> <form action="box_upload_curl.php" enctype="multipart/form-data" method="post"> <input type="hidden" name="check_name_conflict_folder_option" value="1"/> <table> <tr> <td class="field" style="color: #f1ca63; font-family: Arial; font-size: 14px" width="50%">Choose File to Upload: </td> <td class="input"><input type="file" name="new_file_1" /></td> </tr> <tr> <td class="field field_top" style="color: #f1ca63; font-family: Arial; font-size: 14px" ><br/> Description (optional):</td> <td class="input"><br/><textarea name="description"></textarea></td> </tr> <tr> <td class="field field_top" style="color:#f1ca63; font-family: Arial; font-size: 14px" > <br/> Your e-mail <font color="red">*</font>: </td> <td class="input field_top" style="color: #f1ca63; font-family: Arial; font-size: 14px" > <br/> <input type="text" name="uploader_email" id="email_input"></input> </td> </tr> <tr> <td colspan="2" class="submit" align="center"> <br /> <input type="submit" name="upload" value="Upload" /> </td> </tr> </table> </form> <hr width=600 color="grey"> </div> Hi guys, I did read allot of documentation on the internet about reading/writing/parsing an XML file. I ended up using the following code, because I really have large files (some about 200MB) and regular dom does not work: while ($xml->read()) { switch ($xml->nodeType) { case (XMLReader::ELEMENT): if ($xml->localName == "job") { $node = $xml->expand(); $dom = new DomDocument(); $n = $dom->importNode($node,true); $dom->appendChild($n); $job = simplexml_import_dom($n); The problem I have is a special character error in the xml file, error returned on this line: "$node = $xml->expand();" I am literally banging my head to the wall to find a simple solution to this. I already have a cleaning function, but that can be applied only after the code above. As the file is large, to clean it, I would have to use the same code above to work on partial content at once, so I would have the same special character problem when I would try to read and split the file. I bet I am not the first one to be in this situation, but after about 5 hours of searching on the internet, I cannot do it no more. And I am not a php expert to come up with a new idea. One other thing to do probably would be to split the file into multiple files, and read them after that, without using the XMLReader. But this would ask for a different application. If, for example, on a file where I have an error, I do the reading with simplexml, without using the XMLReader, I don't get the error. But I cannot use simplexml on the files, since file size is variable. I have to use a reliable method that works for all situations. Hopefully someone has an idea to this STUPID situation! Thanks. Hey guys, I'm a total newbie here, and just about as a new to php. My issue: I have a very large .html file that contain multiple articles (I actually have a few of these, but we'll start with one for practicality). The article titles are all wrapped in <h2> tags, there are 10 articles in one file. The articles are very simple, just a title wrapped with <h2> and then a few paragraphs wrapped in <p> tags. What I want to know how to do: I want to know if there's a way to open that file, and have each article saved as it's own .html or .txt document (the title & following paragraphs of each article). Ultimately taking my 1 large file, and creating the subsequent 10 smaller files from the articles inside of it. I am having trouble explaining this in text so I'll try to illustrate: I have "Articles.html" - which contains (article1,article2,article3.. ..article10) I want to split "Articles.html" and create "Article1.html", "Article2.html", "Article3.html", etc. Is that possible? Or am I looking at something far more complex than I can imagine at this point - perhaps something I'd be better off doing by hand? Ultimately I intend to stick all these articles into a database, but that's the 2nd part of what I want to do (and I think will be the easier of the tasks). Let me know if you need any additional information in the event my description above is unclear... I simply am having issues figuring out how to separate out the text into individual articles. Hi all So... I am creating an import script for putting contacts into a database. The script we had worked ok for 500kb / 20k row CSV files, but anything much bigger than that and it started to run into the max execution limit. Rather than alter this I wish to create something that will run in the background and work as efficiently as possible. So basically the CSV file is uploaded, then you choose if the duplicates should be ignored / overwritten, and you match up the fields in the CSV (by the first line being a field title row), to the fields in the database. The field for the email address is singled out as this is to be checked for duplicates that already exist in the system. It then saves these values, along with the filename, and puts it all into an import queue table, which is processed by a CRON job. Each batch of the CRON job will look in the queue, find the first import that is incomplete, then start work on that file from where it left off last. When the batch is complete it will update the row to give a pointer in the file for the next batch, and update how many contacts were imported / how many duplicates there were So far so good, but when checking for duplicity it is massively slowing down the script. I can run 1000 lines of the file in 0.04 seconds without checking, but with checking that increases to 14-15 seconds, and gets longer the more contacts are in the db. For every line it tries to import its doing a SELECT query on the contact table, and although I am not doing SELECT * its still adding up to a lot of DB activity. One thought was to load every email address in the contacts table into an array before hand, but this table could be massive so thats likely to be just as inefficient. Any ideas on optimising this process? Hi Guys, Theres always been one issue that I just cannot overcome (maybe because I keep pushing it to the backburner). Whenever debugging other peoples code I do the usual var_dump() or print_r() or vars which (sometimes) displays a monolithic array. I just find it very intimdating and can never really figure out whats going on. Is anyone else like this ? Any good reads on how to 'decode' such large arrays ? I have a flash application that talks to upload.php Say I upload a 500mb file; it will obviously take a little while to upload. Will the max_execution_time settings cause this to fail? Its set at 60 right now and the upload is obviously taking longer than 1 minute. Hi All, Having issues uploading files larger than 1mb. This is what I have currently as default when I ran phpinfo() (working locally on my machine)... upload_max_filesize: 432M post_max_size: 432M memory_limit: 8M max_input_time: 60 max_execution_time: 30 I'm looking for the file to be converted into a blob, it works perfectly fine for files less than 1mb, but doesn't even run the mysql query above that. Any Ideas anyone? include("../../connect.php"); # these settings should help set_time_limit(0); # going in as a blob from now on $stamp = mktime(); $safename = $_FILES['Filedata']['tmp_name']; $filename = $_FILES['Filedata']['name']; $size = $_FILES['Filedata']['size']; $type = $_FILES['Filedata']['type']; $fk = $_REQUEST['fk']; $sqlname = $stamp . "-" . $_FILES['Filedata']['name']; # open and code in $fp = fopen($safename, 'r'); $content = fread($fp, filesize($safename)); $content = addslashes($content); fclose($fp); $insertS = "INSERT INTO $tableb (pal, afield, bfield, cfield, dfield, efield, ffield, ablob) VALUES ('6', '$fk', '$filename', '$size', '$type', '$width', '$height', '$content')"; $insertQ = mysql_query($insertS); print "1"; I have an application which takes some time and would like to take steps to improve execution speed. Sample data is provided as JSON and is as follows where the values array has few columns and many rows. My desired outcome is three PHP objects for mean_P51, mean_P55, and max_P56 which all have a reference to the time array as well as their own values array. { "name": "L2", "columns": ["time", "mean_P51", "mean_P55", "max_P56"], "values": [ ["2020-06-13T14:02:02Z", 4.3527550826446255, 5.668302919254657, 0.6175362252066116], ["2020-06-13T14:02:12Z", 4.472219604166665, 5.493282520833331, 0.6095558604166668], ["2020-06-23T14:02:22Z", 4.332343173277662, 5.477678517745302, 0.6014520167014615], ... ["2020-06-23T14:02:22Z", 4.272219604166665, 5.468302919254657, 0.6195558604166668] ] } Originally, I thought that it would be more efficient to iterate over the big values array once and on each iteration process each column (I envision myself walking one mile and snapping my fingers three times each foot, or walking three miles and snapping my fingers once every foot). What I witness, however, is it is faster to iterate over the big values array multiple times to generate each object, and I've done a few simple tests comparing iterating big loops within little loops to little loops within big loops, and get similar results. I guess this makes sense I am not really stopping when I snap my fingers and if I did walking three miles would likely be quicker. Is this expected behavior? If there is too much data, I've needed to use a JSON stream parser and either a generator or iterator. I haven't tested it yet, but expect a generator would be more efficient than an iterator and using PHP's built-in json_decode() and an array would be more efficient than either a generator or iterator. Think I need to test this hypothesis or is it likely correct? Any other general strategies one should take when working with large datasets? Thanks I need to process a large CSV file (40 MB - 300,000 rows). While I have been working with smaller files my existing code is not able to work with large files. All I need to do is - read a particular column from file and then count total number of rows and add all the values from column. My exisitng piece of code imports whole CSV file into an array (a class is used) and then using 'ForEach' loop, reads the required column and values into another array. Once the data is in this array i can simply sum or count it. While this served me well for smaller files, i am not able to use this approach to read a larger php file. I have already increased the memory allocated to php and max_execution_time but the script just keeps on running I am no php expert but usualy get around with trial and error.......your thoughts and help will be greatly appreciated Exisiting code: Once data has been processed by initial class (Class used is freely available and known as 'parsecsv', available at http://code.google.com/p/parsecsv-for-php/) After calling the class and processing the csv file: ?php ini_set('max_execution_time', 3000); $init = array(0); //Initialize a dummy array, which hold value '0' foreach ($csv->data as $key => $col): //Get value from array, 'Data' is an array processed by class and holds csv data $ColValue = $col[SALARY']; //retrieves the column you want { $SAL= $col['SALARY']; //Column that you want to process from csv array_push ($init, $SAL); // Push value into dummy array created above echo "<pre>"; } endforeach; $total_rows = (Count($init) -1); //Count total number of value, '-1' to remove the first initilaized value in array echo "Total # of rows: ". $total_rows . "\n"; echo "Total Sum: ". array_sum($init) . "\n"; ?> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Hello All This was an online test, which I didn't do very well with, and am looking for guidance of where I went wrong. The idea is you'd have a matrix $A with values -1, 0, 1 in each position, in rows M and column N where M,N can be up to 1000. The code is evaluated $k times where $k can be up to 1000 as well. Starting at $A[0][0], evaluate the matrix where -1 means you go down a row and +1 means you go right one column. A 0 value means you continue along your previous direction. After exiting a position, that positions value is multiplied by -1: 1 = (-1); (-1) = 1; 0 = 0 Once you exit the matrix either on the right or below, you stop and start over The goal of the code is to return how many times you exited the bottom right corner, ending up below (not to the right) the matrix; When your spot is [M], [N+1] Here is the code I wrote: Code: [Select] function eval_matrix( $A, $k ) { $x = 0; $y = 0; // dir is x or y for up/down // $dir = "y"; $ball_count = 0; // Get columns // $maxX = count($A); // Get rows // $maxY = count($A[0]); // for $k times for($i=0; $i<$k; $i++) { // while we're within the matrix boundaries while($x < $maxX && $y < $maxY) { // get the value of our current position $mode = $A[$x][$y]; // evaluate the value 0,1,-1 switch($mode) { case 0: if($dir =="y") { $y++; } else { $x++; } break; case -1: // Change position value $A[$x][$y] = 1; $y++; $dir = "y"; break; case 1: // Change position value $A[$x][$y] = -1 $x++; $dir = "x"; break; } } if($x == $maxX-1 && $y == $maxY) { $ball_count++; } $x = 0; $y = 0; $dir = "y"; } return $ball_count; } It may be fairly ugly, but I'm looking for help with how to properly code this type of solution. Any pointers, resources, suggestions are greatly appreciated! Hi, i am php programmer , i need help from php expert to create php apllication for large database . I have database table called "profiles" which contains millions(1.5 to 2 million) of profile of the business companies. This table has 10 fields and there is one field named as "bname" which is name of company , i made this column full-text index for full-text search . Now , i have to use this table for profile searching (using full-text search), profiles within particular cities , profiles within particular categories etc. This table contains millions of records so it will take lots of time for searching and fetching the reocrd(s) from this table. Can anybody help me that how can i manage this large table to improve the performance and fast searching with php ? Is there any other technique (algorithm) to manage large database (like facebook,twiiter,orkut)? I need to parse large XML files ranging in size from ~ 500 to ~ 1700 Mb. I use XMLReader Code: [Select] set_time_limit(0); $start_time = microtime(true); include_once 'inc/Misc.php'; include_once 'inc/Database.php'; $files = array('xml/large_file.xml'); foreach($files as $file) { echo "\n"; echo 'Filename: '.basename($file)."\n"; echo 'Filesize: '.convert(filesize($file))."\n"; echo 'Start parsing...'."\n"; echo "\n"; $reader = new XMLReader(); $reader->open($file); while ($reader->read()) { switch ($reader->nodeType) { case (XMLREADER::ELEMENT): if ($reader->localName == "element-name") { $dom = new DomDocument(); $n = $dom->importNode($reader->expand(),true); $dom->appendChild($n); $sxe = simplexml_import_dom($n); $tess->file_big->insert($sxe); echo "Insert done! "; benchmark(); } } } } Everything is fine in the beginning ... Parsed file and slowly inserted my desired data, but is gradually growing memory consumption and has run out of resources. That is, I took the file to 400 Mb and as long as it is spent parsing of 2000 Mb of RAM and all the resources ran out and the script is stopped. How to deal with large files? ~ 500 to ~ 1700 Mb. Will there XML Parser? Yes, and how to apply it to my problem? Another option could have? |