
I'm trying to write my first crawler using PHP with the cURL library. My aim is to fetch data from one site systematically, which means the code doesn't follow every hyperlink on the site but only specific links.

The logic of my code is to go to the main page, get the links for several categories, and store them in an array. Once that's done, the crawler visits those category pages and checks whether a category has more than one page. If so, it stores the subpages in another array as well. Finally, I merge the arrays to get all the links for the pages that need to be crawled and start fetching the required data.
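
A rough outline of that logic looks like this (a sketch only; it uses the get_url() function shown further down, and the XPath expressions are placeholders for whatever the site's real markup requires):

// Collect the href attributes of the links matched by an XPath query on one page.
function collect_links($url, $xpathQuery)
{
    $links = array();
    $dom = new DOMDocument();
    @$dom->loadHTML(get_url($url));
    $xpath = new DOMXPath($dom);
    foreach ($xpath->query($xpathQuery) as $node) {
        $links[] = $node->getAttribute('href');
    }
    return $links;
}

// Pass 1: category links from the main page (XPath is a placeholder).
$categories = collect_links('http://www.site.com/', '//div[@id="categories"]//a');

// Pass 2: pagination links inside each category (XPath is a placeholder).
$subpages = array();
foreach ($categories as $category) {
    $subpages = array_merge($subpages, collect_links($category, '//div[@class="pager"]//a'));
}

// Merge both lists so every page is crawled exactly once.
$pages2crawl = array_unique(array_merge($categories, $subpages));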

I call the function below to start a cURL session and fetch the data into a variable, which I later pass to a DOM object and parse with XPath. I store cURL's total_time and http_code in a log file.

The problem is that the crawler runs for 5-6 minutes, then stops and doesn't fetch all the required links for the sub-pages. I print the contents of the arrays to check the result. I can't see any HTTP errors in my log; all sites return an HTTP 200 status code. I can't see any PHP-related errors either, even with PHP debugging turned on on my localhost.

I assume the site blocks my crawler after a few minutes because of too many requests, but I'm not sure. Is there any way to get more detailed debugging output? Do you think PHP is adequate for this type of activity, given that I want to use the same mechanism to fetch content from more than 100 other sites later on?

My cURL code is as follows:

function get_url($url)
{
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_HEADER, 0);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 30);
    curl_setopt($ch, CURLOPT_URL, $url);
    $data = curl_exec($ch);
    $info = curl_getinfo($ch);

    // Log fetch time and HTTP status for every request.
    $logfile = fopen("crawler.log", "a");
    fwrite($logfile, 'Page ' . $info['url'] . ' fetched in ' . $info['total_time'] . ' seconds. Http status code: ' . $info['http_code'] . "\n");
    fclose($logfile);
    curl_close($ch);

    return $data;
}

// Start crawling the main page.

$site2crawl = 'http://www.site.com/';

$dom = new DOMDocument();
@$dom->loadHTML(get_url($site2crawl));
$xpath = new DOMXPath($dom);
  • I found this line in my LAMPP error_log: [:error] [pid 2996] [client 127.0.0.1:49848] PHP Fatal error: Maximum execution time of 30 seconds exceeded in /opt/lampp/htdocs/clw/clw.php on line 73. I'll try to increase the timeout for cURL and retry. Commented Dec 31, 2012 at 19:58
  • I increased the timeout parameter, then changed it to zero, but it did not help. Commented Dec 31, 2012 at 20:11
  • Have you checked whether cURL itself is reporting any errors? Something like this should work: if ($data == false) { fwrite($logfile, curl_error($ch)); } (a fuller version of this check is sketched after these comments). Commented Dec 31, 2012 at 20:42
  • By 'increase timeout for cURL' do you mean you used set_time_limit? Commented Dec 31, 2012 at 20:47
  • 1
    Thanks to kkhugs who suggested to set the time limit to zero within the code. It helped. The following code solved my issue: set_time_limit(0); I also implemented the code which can be found here to avoid memory leak issue. Thread can be closed. Thanks for everyone! gomez Commented Dec 31, 2012 at 23:25
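
A fuller version of the error check suggested in the comments, folded into the question's get_url() function, might look like this (a sketch only; get_url_checked is just an illustrative name, and the log-file handling mirrors the original):

function get_url_checked($url)
{
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_HEADER, 0);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 30);
    curl_setopt($ch, CURLOPT_URL, $url);
    $data = curl_exec($ch);

    $logfile = fopen("crawler.log", "a");
    if ($data === false) {
        // curl_errno()/curl_error() report transport-level failures
        // (timeouts, DNS problems, connection resets) that never show
        // up as an HTTP status code in the log.
        fwrite($logfile, 'cURL error ' . curl_errno($ch) . ': ' . curl_error($ch) . "\n");
    } else {
        $info = curl_getinfo($ch);
        fwrite($logfile, 'Page ' . $info['url'] . ' fetched in ' . $info['total_time'] . ' seconds. Http status code: ' . $info['http_code'] . "\n");
    }
    fclose($logfile);
    curl_close($ch);

    return $data;
}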

2 Answers


Use set_time_limit to extend the amount of time your script is allowed to run. That is why you are getting "Fatal error: Maximum execution time of 30 seconds exceeded" in your error log.
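
For example, putting this at the very top of the crawler script lifts the 30-second cap (a value of 0 means no limit at all):

// Remove the execution-time limit for this script only.
// (Alternatively, raise max_execution_time in php.ini for all scripts.)
set_time_limit(0);

// ... the rest of the crawler then runs without hitting the fatal error.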




Do you need to run this on a server? If not, you should try the CLI version of PHP - it is exempt from common restrictions such as the execution time limit.
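
For instance, assuming the script is saved as crawler.php (the file name is only an example), running it with "php crawler.php" from a shell uses the CLI SAPI, which has no execution time limit by default:

// Quick sanity check of which SAPI is running and its time limit.
echo PHP_SAPI, "\n";                       // prints "cli" when run from the shell
echo ini_get('max_execution_time'), "\n";  // 0 (no limit) under the CLI by default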

3 Comments

Yes, I would like to run it on a server later on in production.
Why would you not be able to run the CLI version on a server?
Thanks @TobyAllen, my issue was already solved. I'll have enough time later to figure out how I will implement this in production. I'm going to improve my crawler code first (with parallel threads, for example).
