PHP - Getting All Links From A Webpage
There are different methods for catching the links in <a> tags on a webpage, but surprisingly, none of them let me capture all of the links from e.g. http://www.amazon.com/Notebooks-Laptop-Computers/b/ref=sa_menu_lapnet6?ie=UTF8&node=565108
All of the links are captured except for a series in the main area, e.g.:

http://www.amazon.com/s/ref=amb_link_85318851_3?ie=UTF8&node=565108&field-availability=-1&brand=acer&emi=ATVPDKIKX0DER&pf_rd_m=ATVPDKIKX0DER&pf_rd_s=center-6&pf_rd_r=1HET3VVW5SXMJ0SRKJVM&pf_rd_t=101&pf_rd_p=1291722382&pf_rd_i=565108
http://www.amazon.com/s/ref=amb_link_85318851_22?ie=UTF8&node=1232596011&brand=samsung&pf_rd_m=ATVPDKIKX0DER&pf_rd_s=center-6&pf_rd_r=1HET3VVW5SXMJ0SRKJVM&pf_rd_t=101&pf_rd_p=1291722382&pf_rd_i=565108
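For reference, the usual DOM-based approach is sketched below; the URL is a placeholder for whatever page is being scraped. One possible reason some of the centre-area links never turn up is that they are injected by JavaScript after the page loads, so they are simply not present in the raw HTML that PHP downloads.

<?php
// Minimal sketch: collect every <a href> on a page with DOMDocument.
$html = file_get_contents('http://www.example.com/');

$dom = new DOMDocument();
libxml_use_internal_errors(true); // real-world HTML is rarely valid; suppress parse warnings
$dom->loadHTML($html);
libxml_clear_errors();

$links = array();
foreach ($dom->getElementsByTagName('a') as $anchor) {
    if ($anchor->hasAttribute('href')) {
        $links[] = $anchor->getAttribute('href');
    }
}
print_r($links);
?>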
Similar Tutorials

Code: [Select]
try {
    echo "<br>";
    foreach ($dbh->query("SELECT * FROM test_shot WHERE sold=1 ORDER BY year ASC") as $row) {
        if ($row['picture'] != "" && $row['picture'] != null) {
            echo "<div class='image-holder'><img src='" . $row['picture'] . "' width=300px /><br>";
        }
        if ($row['year'] != "" && $row['year'] != null) {
            echo $row['year'];
        }
        if ($row['description'] != "" && $row['description'] != null) {
            echo $row['description'];
        }
        if ($row['sold'] == 1) {
            echo "<img src='images/sold1.png'><br>"; // Add your image code here
        } elseif ($row['sold'] == 0) {
            echo "</div><br>";
        }
    }
} catch (PDOException $e) {
    print $e->getMessage();
}
?>

Code: [Select]
<html>
<?php
$id = $_GET['id'];
$dbusername = "web148-matt";
$dbpassword = "matt";
$dbdatabase = "web148-matt";
mysql_connect(localhost, $dbusername, $dbpassword);
@mysql_select_db($dbdatabase) or die("Unable to select database");
mysql_query("UPDATE count SET clicks=clicks+1 WHERE id='$id'");
$sql = mysql_query("SELECT link FROM count WHERE id='$id'");
$fetch = mysql_fetch_row($sql);
$result = mysql_query("SELECT * FROM count");
while ($row = mysql_fetch_array($result)) {
    echo "<a href=" . $row['link'] . ">Link</a>";
}
?>
<a href='http://www.google.com'>Google</a>
<a href='/index.php?id=2'>link2</a>
</html>
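A side note on the snippet above: the mysql_* functions it relies on were deprecated in PHP 5.5 and removed in PHP 7. A rough sketch of the same click-counter logic with PDO prepared statements follows; the DSN and credentials are placeholders, not the original values.

<?php
// Same click-counter logic, redone with PDO prepared statements so the
// id from the query string is never interpolated into the SQL.
$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'dbuser', 'dbpass',
    array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION));

$id = isset($_GET['id']) ? $_GET['id'] : 0;

// Record the click.
$stmt = $pdo->prepare("UPDATE count SET clicks = clicks + 1 WHERE id = ?");
$stmt->execute(array($id));

// List all stored links.
foreach ($pdo->query("SELECT link FROM count") as $row) {
    echo "<a href='" . htmlspecialchars($row['link']) . "'>Link</a>";
}
?>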
So I have been working on my website for a while, which is all PHP & MySQL based, and I'm now working on the social-networking part, building in functions similar to what Facebook has. I have run into a difficulty with getting information back from a link. I checked several sources on how it can be done; the most popular answer was a tutorial titled 'Facebook Like URL data Extract Using jQuery PHP and Ajax'. I got the scripts, but all of them work with HTML links only. My site uses .php extensions throughout, and copying and pasting my site's links into these demos returns nothing. I checked the code, and all of them use file_get_contents() and parse the HTML file, so if I pass 'filename.php' it returns nothing, presumably because the PHP has not been processed yet and the function gets the content of the PHP script, with no data of course. So my question is: how is it possible to extract data from a link with a .php extension (on Facebook it works), or how do I get the PHP file executed so that file_get_contents() gets back the HTML?

Here is the link with the code & demo I am using: http://www.sanwebe.c...-php-and-jquery
Thanks in advance.
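The likely explanation: file_get_contents() returns rendered HTML only when the target is requested over HTTP, so the web server executes the PHP first; reading the file by its local path returns the raw, unexecuted source. A minimal sketch (the URLs are placeholders, and allow_url_fopen must be enabled in php.ini):

<?php
// Requesting the page over HTTP makes the web server run the PHP and
// hand back the rendered HTML, which can then be parsed for the title,
// meta description, and so on.
$html = file_get_contents('http://www.example.com/page.php');

// By contrast, reading the same file from disk returns the raw PHP
// source, which is why the demos come back empty:
// $source = file_get_contents('/var/www/page.php');

if (preg_match('~<title>(.*?)</title>~si', $html, $m)) {
    echo "Page title: " . trim($m[1]);
}
?>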
Hi. I found this code for my website, and it works well. It's just that some text appears that I want to delete.

Code: [Select]
<?php /* index.php */
session_start();
if ($_SESSION['login'] == false) {
    header("Location:login.php");
}
$a = $_GET['a'];
$source = 'http://****.elementfx.com/test.php';
//$source = 'sample.txt';
$page_all = file_get_contents($source);
$div_array = array();
preg_match_all('#<div id="intro">(.*?)</div>#sim', $page_all, $div_array);
//print_r($div_array);
?>
<html>
<head>
<title>Home</title>
</head>
<body>
<center>
<p><b><font color="blue" size="20">*****</font></b> <font color="blue" size="2">version 0.9_01</font></p>
<br/>
<br/>
<br/>
<textarea cols="50" rows="10"><?php print_r($div_array[1]); ?></textarea>
</center>
</body>
</html>

The text it should get is:

Quote
Hello I'm Something! <p>asdoasduiasdasnda</p> asdasdaksdjas<br/> sdffdsg

But the output is:

Quote
Array ( [0] => Hello I'm Something! <p>asdoasduiasdasnda</p> asdasdaksdjas<br/> sdffdsg )

I need to get rid of the Array( ... thing. Regards, Worqy
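The Array ( ... ) wrapper is just how print_r() renders an array: preg_match_all() fills $div_array[1] with one entry per match. Printing the element itself, or imploding all the matches, removes it. A small sketch of the fix:

<?php
// $div_array[1] holds every captured group, one entry per match.
// Echo the first capture directly instead of print_r()-ing the array:
if (!empty($div_array[1])) {
    echo $div_array[1][0];
}

// Or, if several <div id="intro"> blocks could match, join them all:
// echo implode("\n", $div_array[1]);
?>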
I have a particular PHP file which is publicly located; however, I don't want anyone but me to access it. Below are my thoughts on how to do so. Please comment.

Use an uncommon name, and definitely not index.php.
Either include a file called index.html in the same directory, or set up Apache not to show directory listings using Options -Indexes, or maybe do both for good measure.
Require some variable to be set to a given value in either the GET or POST array, and if it is not set, send a 404 header and display the 404 missing-file HTML.

Hi, I know this is probably something simple, but I have two links to an RSS feed which I need to put up on a web page. Not just a link, but the actual feed rendered with a number of news headlines. One is a ".rss2.XML" link and the other is an HTML link. The HTML one, I suppose, is static, so probably not any good. Thanks in advance.

Just need a correction on this please; there is a parse error. I think I got the quote marks wrong somewhere?

echo "<li>" . <a href="/stock/$stock.htm"> .$Stock . ": " . $Name . "</a></li>\n";

Thanks

Hi, I want to save a webpage into a location on my server, as simple as that. At present, I am reading the content of the file using cURL, saving that content into a text file, then saving that text file. I assume there must be some straightforward way just to save www.example.com/file.html into my server directory as file.html. Can anyone help me on this, please? Thanks, -Abd

Hey all, what's the most efficient way to wait until a page on your own website is done being rendered, and then parse it for something specific? The reason I'm having to scrape it rather than just generate it myself is that the part being scraped is generated in an iframe on my site by another site, and the data inside of it is dynamic. Thanks

Hi guys, yet again, I was wondering if one of you geniuses could help me. I want to be able to take a screenshot of an entire webpage (from the header to the footer) using a PHP script. So if I crawled the webpage URL = http://www.google.co.uk/search?q=php+sc ... =en&num=10 I would like to take a screenshot of that entire page, from the Google header to the Google footer, store it in a TEMPORARY folder on my server, and then be able to call it back. Is this possible? I've been searching everywhere for a solution, but the closest thing I found was http://mistonline.in/wp/get-youtube-vid ... avascript/ I would be really grateful if one of you could please help me. Thank you in advance. M

I've used file_get_contents() before, but it seems like it took a long time to load... What is the best or most efficient way to get certain content from a webpage? Like all the posts on a single forum page, for example, without loading any images or styles, just the "source".

Is there some way of checking the page name in PHP? For example, home.php. Also, as a side question, does anyone know how to autoplay an .avi video without showing anything but the actual video? (I mean I don't want to see the progress bar or the play, stop, and pause buttons, etc.) All help would be great.

So we just started PHP after learning a month of HTML, and we need to create a website with a page that allows the admin to change the colors, font size, header, etc. on the webpage itself via forms. E.g., I'm on the site and I can use a drop-down menu to change the background to "blue", for example; no need to go edit the CSS file or anything. How can this be done? My second and final question is: how would I write code that allows my website to have users "log in" to their account so they can edit their page? Thanks a lot; I'm not good at PHP yet, I've just started, so please keep that in mind :) - Kranti

Hello, I'm using a MyBB forum, and I have another webpage that I want to access with my forum username and password. I don't know how to make it work.
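For the forum login, one generic approach is to POST the login form with cURL and keep the session cookie in a cookie jar for follow-up requests. This is only a sketch: the URL and field names below are assumptions (MyBB's login handler is member.php with action=do_login, but newer versions may also expect a CSRF token taken from the login form), so check the forum's actual form before relying on them.

<?php
// Log in by POSTing the login form, storing cookies in a jar so that
// later requests with the same handle are authenticated.
$jar = tempnam(sys_get_temp_dir(), 'cookies');

$ch = curl_init('http://www.example.com/forum/member.php');
curl_setopt_array($ch, array(
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => http_build_query(array(
        'action'   => 'do_login',
        'username' => 'myuser',
        'password' => 'mypass',
    )),
    CURLOPT_COOKIEJAR      => $jar,  // write cookies after the request
    CURLOPT_COOKIEFILE     => $jar,  // send them on later requests
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_FOLLOWLOCATION => true,
));
curl_exec($ch);

// The session cookie is now in the jar, so a second request to a
// members-only page comes back as the logged-in user:
curl_setopt($ch, CURLOPT_URL, 'http://www.example.com/protected-page.php');
$html = curl_exec($ch);
curl_close($ch);
?>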
Hey, again I have a problem: I need info from another webpage.

Code: [Select]
<?php
// get as ?
$url = $_GET['url']; // value of script
$website = $url;
$referer = 'olar.eu';
$useragent = 'Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)';
$filename = basename($url);
// connect
$curl_handle = curl_init();
curl_setopt($curl_handle, CURLOPT_USERAGENT, $useragent);
curl_setopt($curl_handle, CURLOPT_AUTOREFERER, $referer);
curl_setopt($curl_handle, CURLOPT_URL, $website);
curl_setopt($curl_handle, CURLOPT_CONNECTTIMEOUT, 5);
curl_setopt($curl_handle, CURLOPT_RETURNTRANSFER, 1);
$source = curl_exec($curl_handle);
curl_close($curl_handle);
$a = preg_match("/>Level:.*.\n.*.>([0-9]*)</", $source, $b);
$Level = $b[1];
echo "Level: $Level";
?>

I get only an empty value. url=http://www.gamersfirst.com/warrock/?q=Player&nickname=admin-b13r

Hi All... I am looking for a way of saving the currently open webpage as a PDF file. I searched for that and even went to this site: http://www.tcpdf.org. But there I also didn't find any perfect way of doing it. If somebody else has tried TCPDF for that, or done something like it, then please, please do help me. Thanks.

I am trying to set the HTML page title dynamically, but this code from my book doesn't work...

Code: [Select]
<title>
<?php
// Dynamically set Page Title.
if (isset($page_title)) {
    echo $page_title;
} else {
    // Default Page Title.
    echo 'Knowledge is Power: And It Pays To Know';
}
?>
</title>

What is wrong with it? (All I see is the code in the webpage/window title...) TomTees

Hey guys, I have the following code and am trying to get the body of the webpage; however, it is not currently working and the array at the end is empty. Any help appreciated!!!!!

Code: [Select]
<?php
$word = $_GET['word'];

function get_web_page($url, $curl_data)
{
    $options = array(
        CURLOPT_RETURNTRANSFER => true,       // return web page
        CURLOPT_HEADER         => false,      // don't return headers
        CURLOPT_ENCODING       => "",         // handle all encodings
        CURLOPT_USERAGENT      => "spider",   // who am i
        CURLOPT_AUTOREFERER    => true,       // set referer on redirect
        CURLOPT_CONNECTTIMEOUT => 120,        // timeout on connect
        CURLOPT_TIMEOUT        => 120,        // timeout on response
        CURLOPT_MAXREDIRS      => 10,         // stop after 10 redirects
        CURLOPT_POST           => 1,          // i am sending post data
        CURLOPT_POSTFIELDS     => $curl_data, // these are my post vars
        CURLOPT_SSL_VERIFYHOST => 0,          // don't verify ssl
        CURLOPT_SSL_VERIFYPEER => false,
        // CURLOPT_VERBOSE     => 1
    );

    $ch = curl_init($url);
    curl_setopt_array($ch, $options);
    $content = curl_exec($ch);
    $err = curl_errno($ch);
    $errmsg = curl_error($ch);
    $header = curl_getinfo($ch);
    curl_close($ch);

    // $header['errno']   = $err;
    // $header['errmsg']  = $errmsg;
    // $header['content'] = $content;
    return $content;
}

$curl_data = "?tranword=" . $word;
$url = "http://www.wordreference.com/es/translation.asp?tranword=" . $word;
$response = get_web_page($url, $curl_data);

preg_match('~<body>(.*)</body>~', $response, $output);
print_r($output);
?>

Thanks lots, Jake
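Two likely culprits in the snippet above: a real page's <body> tag almost always carries attributes, and without the s modifier the dot in the pattern will not cross newlines, so '~<body>(.*)</body>~' never matches a multi-line document. A more forgiving version, as a small sketch:

<?php
// Allow attributes on the <body> tag, let "." span newlines (s), and
// match the tag case-insensitively (i).
if (preg_match('~<body[^>]*>(.*?)</body>~si', $response, $output)) {
    echo $output[1];
} else {
    echo "No <body> element found in the response.";
}
?>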
Hi all, is there any way of embedding Excel or .csv files in a webpage other than the following two options?

saving it as a webpage
using Google Docs

I used the following code, and I am getting the .csv file in the page, but it would be good to have the colour formatting, bold, italics, etc. in the webpage as well. Is that possible?

Code: [Select]
<?php
$cnx = fopen("example.csv", "r"); // open example.csv
echo("<table style='border:1px solid #ddd;'>"); // echo the table
while (!feof($cnx)) { // while not end of file
    $buffer = fgets($cnx); // get contents of file (name) as variable
    $values = explode(",", $buffer); // explode "," between the values within the contents
    echo "<tr>";
    for ($j = 0; $j < count($values); $j++) {
        echo("<td style='border:1px solid #ddd;'>$values[$j]</td>");
    }
    echo "</tr>";
};
echo("</table>");
fclose($cnx); // close filename variable
?>
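On the formatting question: a .csv file is plain text, so colours, bold, and italics are simply not stored in it; that styling lives in the original .xls/.xlsx workbook, and pulling it into a page would mean reading the workbook itself with a spreadsheet library (e.g. PHPExcel / PhpSpreadsheet, whose HTML writer can translate cell styles) rather than the CSV export. For reading the CSV itself, fgetcsv() is a safer parser than fgets() plus explode(), because it honours quoted fields that contain commas. A small sketch:

<?php
// fgetcsv() parses each line properly, including quoted cells such as
// "Smith, John" that a plain explode(",") would split in two.
$fh = fopen("example.csv", "r");
echo "<table style='border:1px solid #ddd;'>";
while (($row = fgetcsv($fh)) !== false) {
    echo "<tr>";
    foreach ($row as $cell) {
        echo "<td style='border:1px solid #ddd;'>" . htmlspecialchars($cell) . "</td>";
    }
    echo "</tr>";
}
echo "</table>";
fclose($fh);
?>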