PHP - REST API Caching
Code: [Select]
<?php
$myData = file_get_contents("");
$myObject = json_decode($myData);
$myObjectMap = $myObject->result;
?>

Can I somehow build in that it only makes the request every 5 minutes? If there are many users on the site, it makes too many requests.

Similar Tutorials

Hello, I just had a quick question about caching when PHP compiles a script (or however this works with PHP). Is there any efficiency gain in doing this:

Code: [Select]
$holder = count($anArray);
for ($i = 0; $i < 100; $i++) echo $holder;

rather than this:

Code: [Select]
for ($i = 0; $i < 100; $i++) echo count($anArray);

If PHP is smart, then count($anArray) would be called once, and any subsequent calls would just read a value stored in a cache. Is this how PHP works?

Hi! I was a little confused and am not able to figure out how to use caching server-side and client-side, and how to implement caching using PHP.

Hi, I have a PHP script which allows me to upload images to my product page. When I select an image to copy, it simply does a copy() call, followed by a header("Location: products.php"); redirect. This all works OK, but when the page reloads, the image has not changed; if I refresh the page, it loads fine. So I think this is an image caching problem. Any ideas on how to solve this? Thanks

Let's say I have 100,000 users on my forum and I want to cache their profile info (About me) in .php files, which is easy, but would that take more space than 100k rows in a DB table? Just wondering; probably a stupid question. At the moment I cache some lottery info and some other stuff, but that's only one .php file. It would be dumb to cache the info in 100k .php files, one for each user ID, right? Or maybe store it all in one .php file? That would be a huge filesize. Better to just keep the data in MySQL, right?
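The 5-minute throttle from the first question can be sketched with a simple file-based cache: store the API response locally and only re-fetch when the stored copy is older than five minutes. This is a minimal sketch; the cache path is a placeholder, and the original post's URL was left blank, so the function takes it as a parameter.

```php
<?php
// Sketch of a time-based file cache for a remote JSON API.
// $maxAge = 300 seconds means at most one remote request per 5 minutes,
// regardless of how many visitors hit the page.
function fetchWithCache(string $url, string $cacheFile, int $maxAge = 300): string
{
    // Serve the stored copy while it is younger than $maxAge seconds.
    if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < $maxAge) {
        return (string) file_get_contents($cacheFile);
    }

    // Cache is missing or stale: fetch from the source again.
    $data = @file_get_contents($url);
    if ($data !== false) {
        file_put_contents($cacheFile, $data, LOCK_EX);
        return $data;
    }

    // Fetch failed: fall back to the stale copy rather than break the page.
    return is_file($cacheFile) ? (string) file_get_contents($cacheFile) : '';
}
```

A call such as fetchWithCache($apiUrl, __DIR__ . '/api-cache.json') then replaces the bare file_get_contents() in the snippet above, and json_decode() is applied to its return value as before.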
I am using a cache script; it writes an array to a file like this:

Code: [Select]
fwrite($fh, '<?php'."\n\n".'define(\'PUN_LOTTERY_LOADED\', 1);'."\n\n".'$lottery = '.var_export($output2, true).';'."\n\n".'?>');

$output2 comes from:

Code: [Select]
$result2 = $db->query('MY QUERY ');
$output2 = array();
while ($cur_donors = $db->fetch_assoc($result2))
    $output2[] = $cur_donors;

Now I want to ditch MySQL and use this script with 7 variables that I already have loaded, so I don't need MySQL at all. How do I pass my 7 variables to var_export() instead of looping over MySQL results?

What I have is a PHP page that runs over 60 queries per visit and gets over 2,000 visits a day. That's thousands of queries, and I'm sure this can easily be simplified to lessen the load. I only need to update the data on the page every 12 hours. So what I'm thinking is to run the queries on a timer based on time() (every 12 hours) and store the results in a .txt file; then the PHP page, instead of running the queries over and over, just extracts the data from the text file. Does this help me at all, or is it useless? Is there a better method? Thanks!

Hi, I just started experimenting with file caching after reading this article: http://www.rooftopsolutions.nl/blog/107 Aside from data which changes frequently, I was wondering if it's ever inappropriate to cache your queries. For example, I have a page with a ridiculous number of queries; some have result sets far smaller than others. Page performance increased, just not to the level I had been expecting, and I'm wondering if having Apache read each of these files is in fact faster than running the queries, with only some of the data being cached.

How can you prevent your browser from caching a page? I think my news feed is being cached, causing new posts not to be displayed even after a refresh. Eventually the posts show up, after about 20 refreshes.
I know my query is right, so I didn't know if there was a way to stop page caching. I don't really know if it is a caching issue, though. I'm starting to think I just have some weird bug in my code, because sometimes on refresh news feed items get removed, then I'll refresh again and some get added back in, then I'll refresh again and they'll all be there. Ever heard of this happening?

Hello, I need to cache a JSON response from an API so I am not making requests on each load. I came across some code that is supposed to accomplish this, but it doesn't seem to be working and I can't figure out why. I created the cache file with 777 permissions, but it doesn't write to or read from that file when I echo out the results.
The code below is just what is supposed to get the contents and cache them. At the end I tried to print the response from the cache file to test it, but nothing appears, and the cache file does exist.
This is the first time I have tried something like this so please be gentle.
// cachePath is the name of the path and file used to store cached current conditions gathered from the API request
$cachePath = dirname(__FILE__) . "/cache/nowcast-cache.txt";
// URL to the API request
$url = "http://api.wunderground.com/api/XXXXXXXXXXXX/geolookup/conditions/q/TX/mesquite.json";

date_default_timezone_set('America/Chicago');

/**
 * Request conditions from the API and return the raw JSON string.
 * (In the original version, $cachePath and $url were not visible inside
 * this function's scope, so nothing was ever fetched or written.)
 */
function api_request() {
    global $url;
    return file_get_contents($url);
}

/**
 * API Request Caching
 *
 * Use server-side caching to store the API response as JSON at a set
 * interval, rather than on each page load.
 */
function json_cached_api_results( $cache_file = NULL, $expires = NULL ) {
    global $cachePath, $request_type, $purge_cache, $limit_reached, $request_limit;

    if ( !$cache_file ) $cache_file = $cachePath;
    if ( !$expires ) $expires = time() - 2*60*60;

    // Refresh when the file is missing or empty, older than the expiry
    // window, or a purge was requested and the request limit isn't reached.
    if ( !file_exists($cache_file)
        || file_get_contents($cache_file) == ''
        || filemtime($cache_file) < $expires
        || ( $purge_cache && intval($_SESSION['views']) <= $request_limit ) ) {

        // The API already returns JSON, so store it verbatim;
        // re-encoding it with json_encode() would double-encode it.
        $json_results = api_request();
        if ( $json_results )
            file_put_contents($cache_file, $json_results);
    } else {
        // Check the number of purge-cache requests to avoid abuse
        if ( intval($_SESSION['views']) >= $request_limit )
            $limit_reached = " <span class='error'>Request limit reached ($request_limit).
Please try purging the cache later.</span>";
        // Fetch cache
        $json_results = file_get_contents($cache_file);
        $request_type = 'JSON';
    }

    return json_decode($json_results);
}

// $json_results is local to the function, so print its return value instead:
print_r( json_cached_api_results() );

Hi, I just have a question about the caching code below.
Should the date function be in the filename if I want to cache the file for several days? Or, if the date changes, will a new file be created and used instead of the previous cache file? Is the date even relevant, since my if statement checks the file modification time? Thanks for the help in understanding this.
Code: [Select]
$cachefile = 'cache/cached/' . $id . date('M-d-Y') . '.php';
$cachetime = 172800; // two days, in seconds

// Check if the cached file is still fresh. If it is, serve it up and exit.
if (file_exists($cachefile) && time() - $cachetime < filemtime($cachefile)) {
    include($cachefile);
    exit;
}

ob_start();
{HTML TO CACHE}
$fp = fopen($cachefile, 'w');
fwrite($fp, ob_get_contents());
fclose($fp);
// finally send browser output
ob_end_flush();

I basically want to cache a file for 2-3 days.

I am trying to create a script for caching an XML file. The problem is that this XML file was poorly constructed and I can't do anything about it. XML file: http://www.boj.org.jm/uploads/tbills.xml

I would like the cached XML file to be restructured like this:

Code: [Select]
<tbills>
  <tbill>
    <title>Announcement</title>
    <doc>www.boj.org.jm/pdf/tbill_press_release_2010-07-13.doc</doc>
    <date>Jul 14 2010 12:00am</date>
  </tbill>
  <tbill>
    <title>Results</title>
    <doc>www.boj.org.jm/pdf/tbill_results_2010-june-23.doc</doc>
    <date>Jun 23 2010 12:00am</date>
  </tbill>
</tbills>

I can find tutorials that work if the source XML data is properly structured, but I can never seem to find one that addresses situations like this. I found this code in a tutorial and tried to edit it to what I would need, but I'm stuck on 2 things: How do I get this to change the structure of the XML when caching? How do I use the calling script, and where do I put it?

Caching script (caching.php)

Code: [Select]
<?php
/*
 * Caching
 * A small PHP class to
 */
class Caching {

    var $filePath = "";
    var $apiURI = "";

    function __construct($filePath, $apiURI) {
        // check if the file path and api URI are specified; if not, break out of the constructor.
        if (strlen($filePath) > 0 && strlen($apiURI) > 0) {
            // set the local file path and api path
            $this->filePath = $filePath;
            $this->apiURI = $apiURI;

            // does the file need to be updated?
            if ($this->checkForRenewal()) {
                // get the data you need
                $xml = $this->getExternalInfo();
                // save the data to your file
                $this->stripAndSaveFile($xml);
            } else {
                // no need to update the file
                return true;
            }
        } else {
            echo "No file path and / or api URI specified.";
            return false;
        }
    }

    function checkForRenewal() {
        // set the caching time (in seconds)
        $cachetime = (60 * 60 * 24); // one day's worth of seconds
        // get the file time
        $filetimemod = filemtime($this->filePath) + $cachetime;
        // if the renewal date is earlier than now, return true; else false (no need for an update)
        if ($filetimemod < time()) {
            return true;
        } else {
            return false;
        }
    }

    function getExternalInfo() {
        if ($xml = @simplexml_load_file($this->apiURI)) {
            return $xml;
        } else {
            return false;
        }
    }

    function stripAndSaveFile($xml) {
        // put the entries in an array
        $tbill = $xml->TBILLS->ANNOUCE;

        // building the xml object for SimpleXML
        $output = new SimpleXMLElement("<tbill><title></title></tbill>");

        // get only the top 10
        for ($i = 0; $i < 10; $i++) {
            // create a new entry (note: $artists is left over from the
            // tutorial this was adapted from and is never defined here)
            $insert = $output->addChild("artist");
            // insert name and playcount children
            $insert->addChild("name", $artists[$i]->name);
            $insert->addChild("playcount", $artists[$i]->playcount);
        }

        // save the xml in the cache
        file_put_contents($this->filePath, $output->asXML());
    }
}
?>

Calling script (calling.php)

Code: [Select]
<?php
ini_set('display_errors', 1);
error_reporting(E_ALL);

include('caching.php');
$caching = new Caching($_SERVER['DOCUMENT_ROOT'] . "/tbills.xml", "http://www.boj.org.jm/uploads/tbills.xml");
?>

Thanks in advance. Best Regards, _________ Winchester

I have a client that wants an affiliate-driven service, which is fine.
However, they want to offer affiliates the ability to forward their own domains to the service and have that work as the initial affiliate ID token. Now my question: I know I can find the domain my scripts run on using $_SERVER['HTTP_HOST'], but I'm not sure how that works for a domain that is forwarded with a 301 or 302 redirect and masked for use on the service I am building. I want to say I could use $_SERVER['HTTP_REFERER'], but I'm not sure how that will work either for a forwarded, masked domain, since it's not part of the actual host configuration it lands on in the end. Hopefully I am making sense. So what would be my best option for handling a domain that is masked and lands on another domain after being forwarded? I am only catching the initial landing from the domain, setting the tokens I need for it to run as that affiliate, and storing them in sessions, cookies, and various other variables. I guess I am just wondering which approach is better for catching that initial landing from the forwarded domain.

Here is the scenario. I created a niche-market API which provides environmental data. The data is obtained by industrial controllers which don't monitor anything by default, and the API has additional endpoints which are used to instruct the industrial controllers to start monitoring some parameter so the API can then start storing the trend data. The API primarily responds with JSON; however, a couple of endpoints support CSV data. For humans/webclients to access the data, I also created a webserver application which can be added to a Concrete5 (C5) CMS. I tried to make as many of its endpoints as possible do nothing except receive the webclient's request, add a JSON Web Token containing a unique identifier, forward the request to the API using cURL, and return the response as-is to the webclient.
To limit the scope which needed to be created for the C5 application, the API has endpoints that return JSON used by JavaScript on the webclient for client-side validation, endpoints (actually, a different domain) that return static JavaScript/CSS/etc., and endpoints that restructure the JSON data into a more suitable format. I am now looking at creating different applications which do not use C5, but are either 100% custom or use some other CMS such as Drupal. While I tried to limit the scope implemented in the C5 application, I have much more than I desire and would need to duplicate much of it for any future application. The primary scope I would like to remove from the webserver application relates to the HTML views: templates which create HTML from JSON, the JavaScript which interacts with the HTML, and to a lesser degree controllers which determine the type of view (i.e. a list of records or one specific detail record). While the API will only be managed by myself, the intent is that the C5, 100% custom, etc. webserver apps are installed and managed by others. Ideally, some CMS-agnostic Composer package for transforming JSON to HTML already exists, but I don't know whether it does. Also, while not 100% necessary, ideally this functionality could live on my server rather than on each individual web application's server. Whether I build it myself or use an existing package, I expect it will need to do something like the following: the various webserver apps will have a routing table to proxy each request either to my JSON API server (complete) or my new "HTML API Server" (the remainder of this post discusses that portion).
This HTML API server would generate the content either by making a cURL request to the main JSON API or, better yet, since it is likely located on the same server, by routing directly to the JSON API's methods, and would return something like the following:

{
    "html": "<div>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Pharetra massa massa ultricies mi quis hendrerit dolor. Leo integer malesuada nunc vel risus commodo viverra.</div>",
    "javascript_files": [
        "https://resources.myserver.com/someFile.js",
        "https://resources.myserver.com/someOtherFile.js"
    ],
    "css_files": [
        "https://resources.myserver.com/someFile.css",
        "https://resources.myserver.com/someOtherFile.css"
    ]
}

The various applications would then take the HTML and resource files and insert them using their own proprietary methods. EDIT: I am also thinking of using an IFRAME; however, I definitely have concerns with doing so. My day job has nothing to do with software, but I am hoping to make it so. Before embarking on this task, I would like to know whether some existing framework exists and, if not, what general strategy (organization, caching, etc.) I should take to develop it. Thanks

I want to create a REST API for my website. What I want to do is display advertisements from my database on websites that I sell to clients. Their websites will call my database for advertisements to display. Also, I am performing a site validity check where the client's site sends the site URL, site name, and site token to my database for validation. If it returns false, then the site will not display, as it is invalid according to my database. I've been Googling this topic all day but cannot seem to get my head around it. I want to set an API key on my website so that not just anybody can query my database.
For the advertisement query, no parameters need to be sent from the client websites to my database. Can anyone offer some advice on how to do this?

Hi guys. I'm new to PHP and struggling with the REST POST for some reason. I need to insert a contact into this accounting system: http://help.saasu.com/api/#toc-http-post (search for "Example: Insert Contact"). The data structure looks like:

Code: [Select]
<?xml version="1.0" encoding="utf-8"?>
<tasks>
  <insertContact>
    <contact uid="0">
      <salutation>Mr.</salutation>
      <givenName>John</givenName>
      <familyName>Smith</familyName>
      <organisationName>Saasy.tv</organisationName>
      <organisationAbn>777888999</organisationAbn>
      <organisationPosition>Director</organisationPosition>
      <email>john.smith@saasy.tv</email>
      <mainPhone> 02 9999 9999 </mainPhone>
      <mobilePhone> 0444 444 444 </mobilePhone>
      <contactID>XYZ123</contactID>
      <tags>Gold Prospect, Film</tags>
      <postalAddress>
        <street>3/33 Victory Av</street>
        <city>North Sydney</city>
        <state>NSW</state>
        <postCode>2000</postCode>
        <country>Australia</country>
      </postalAddress>
      <otherAddress>
        <street>111 Elizabeth street</street>
        <city>Sydney</city>
        <state>NSW</state>
        <postCode>2000</postCode>
        <country>Australia</country>
      </otherAddress>
      <isActive>true</isActive>
      <acceptDirectDeposit>false</acceptDirectDeposit>
      <directDepositAccountName>John Smith</directDepositAccountName>
      <directDepositBsb>000000</directDepositBsb>
      <directDepositAccountNumber>12345678</directDepositAccountNumber>
      <acceptCheque>false</acceptCheque>
      <customField1>This is custom field 1</customField1>
      <customField2>This is custom field 2</customField2>
      <twitterID>Contact twitter id</twitterID>
      <skypeID>Contact skype id</skypeID>
    </contact>
  </insertContact>
</tasks>

I only have to insert the 3 mandatory fields, which are in the code snippet below, but nothing's working...
Code: [Select]
// set POST variables
$service_url = 'https://secure.saasu.com/webservices/rest/r1/tasks?wsaccesskey=<key removed>';
$curl = curl_init($service_url);

// The API expects the task as an XML document, so build the XML body
// directly. (The original passed a PHP array, named $fileds, to
// CURLOPT_POSTFIELDS, which sends multipart form data instead of XML,
// and then referenced an undefined $curl_post_data variable.)
$curl_post_data =
    '<?xml version="1.0" encoding="utf-8"?>' .
    '<tasks><insertContact><contact uid="0">' .
    '<givenName>John</givenName>' .
    '<familyName>Smith</familyName>' .
    '<organisationName>Saasy.tv</organisationName>' .
    '</contact></insertContact></tasks>';

curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
curl_setopt($curl, CURLOPT_POST, true);
curl_setopt($curl, CURLOPT_HTTPHEADER, array('Content-Type: text/xml'));
curl_setopt($curl, CURLOPT_POSTFIELDS, $curl_post_data);
$curl_response = curl_exec($curl);
curl_close($curl);

$xml = new SimpleXMLElement($curl_response);

Please help, I'm not sure what I'm doing wrong...

EDIT: When starting this post, I thought it was causing the browser to make an extra request to the server. I've since found this wasn't the case, but I didn't change the title of this post, and can't seem to change it to something like "Critique of file caching script".
I am trying to cache a file, and put together the following script. I put the following in the browser:
https://test.sites.example.com/administrator/index.php?cid=2&controller=sell&id=643341356...

and Apache will rewrite it as:

https://test.sites.example.com/index.php?admin=administrator&cid=2&controller=sell&id=643341356

index.php includes the following line:

Code: [Select]
<script src="/lib/js/clientConstants.php?b=1" type="text/javascript"></script>

I have some fairly small text files (2K) which are parsed, with certain delimited fields replaced by values provided in an associated array. There will typically be fewer than 5 replaced fields, and I plan on using preg_replace_callback to find and replace them.
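The replacement step described above might look like the following sketch. The {{field}} delimiter is an assumption, since the post doesn't say how the fields are actually delimited:

```php
<?php
// Sketch: replace {{field}} placeholders in a template string with values
// from an associative array, using preg_replace_callback as described above.
function renderTemplate(string $text, array $values): string
{
    return preg_replace_callback(
        '/\{\{(\w+)\}\}/',
        // Substitute each known field; leave unknown placeholders untouched.
        fn (array $m) => $values[$m[1]] ?? $m[0],
        $text
    );
}
```

The callback receives each match once, so with fewer than 5 fields per 2K file the cost per parse is tiny to begin with.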
I am contemplating caching the files after being parsed (note that they will only be accessed by PHP, and not directly by Apache).
Do you think doing so will provide any significant performance improvement?
If I do go this route, I would like thoughts on how to proceed. I am thinking something like the following:
Append the filename along with the serialized value of the array, and hash it using md5().
Store the parsed file using name "file_".$hash
Get the modification time of the newly created file using filemtime(), and store the value in a new file called "time_".$hash.
bla bla bla
When the next request comes in to parse a file, create the hash again.
If the file exists for the given hash name, and the time file matches filemtime(), use that file, else parse the original file.
Is this a good approach?
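A minimal sketch of the steps above, with one variation: rather than keeping a separate "time_" file per hash, the source file's mtime is stored on the first line of the cache entry itself, which gives an equivalent staleness check with a single file per hash. The cache directory, key scheme, and parse callback are illustrative assumptions:

```php
<?php
// Sketch: cache the parsed output keyed by md5(filename + serialized
// replacement array), and reuse it only while the source file's mtime
// matches the mtime recorded when the cache entry was written.
function cachedParse(string $file, array $values, string $cacheDir, callable $parse): string
{
    $hash      = md5($file . serialize($values));
    $cacheFile = rtrim($cacheDir, '/') . '/file_' . $hash;
    $srcMtime  = filemtime($file);

    if (is_file($cacheFile)) {
        // First line holds the recorded mtime; the rest is the parsed output.
        [$storedMtime, $content] = explode("\n", file_get_contents($cacheFile), 2);
        if ((int) $storedMtime === $srcMtime) {
            return $content; // source unchanged since we cached it
        }
    }

    // Cache miss or stale: parse the original and refresh the cache entry.
    $content = $parse(file_get_contents($file), $values);
    file_put_contents($cacheFile, $srcMtime . "\n" . $content, LOCK_EX);
    return $content;
}
```

Whether this beats re-parsing a 2K file with fewer than 5 fields is doubtful; the stat() and read on the cache file cost nearly as much as the preg_replace_callback pass, so it is worth benchmarking before committing to the extra moving parts.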
Purpose: building a search function for a site that's supposed to be fast and give results as a user types. Queries would be something like "brand1 brand2 brand3". My idea: instead of querying the database on each AJAX request, a keyed array is created once, like ['brand1' => $id1, 'brand2' => $id2], and stored in memory. The next time a query is sent, the array is instantly available in memory and can simply be indexed ($storedArray['brand1']) to fetch any existing IDs. The array would be about 750 KB, 60,000 rows. I don't have much experience with caching, so I'm looking for advice on whether what I'm trying to do even makes sense or is necessary. I know there are solutions like memcached, but I'm not sure my tiny project requires them. Also, does OPcache help here? Would serializing the array be too slow? Please ask if any questions. Thanks
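For a dataset this small, one low-tech alternative to memcached is a var_export()-generated PHP file, and it is exactly the case where OPcache helps: the included file is compiled once and kept in shared memory, so there is no per-request serialize/unserialize cost. A hedged sketch, with illustrative brand data and file paths:

```php
<?php
// Sketch: persist the brand => id lookup array as a PHP file and include
// it on each request. With OPcache enabled, the compiled array stays in
// shared memory across requests, so lookups are plain array indexing.
function buildBrandCache(array $brands, string $cacheFile): void
{
    $php = '<?php return ' . var_export($brands, true) . ';';
    // Write to a temp file and rename, so a concurrent request never
    // includes a half-written cache file.
    $tmp = $cacheFile . '.tmp';
    file_put_contents($tmp, $php, LOCK_EX);
    rename($tmp, $cacheFile);
}

function lookupBrand(string $name, string $cacheFile): ?int
{
    static $map = null; // loaded at most once per request
    if ($map === null) {
        $map = is_file($cacheFile) ? include $cacheFile : [];
    }
    return $map[$name] ?? null;
}
```

buildBrandCache() would run only when the brand table changes (or on a cron), while every AJAX search request just calls lookupBrand(). At 60,000 entries the include is a one-time cost per request without OPcache and effectively free with it.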