How to scrape websites when cURL and allow_url_fopen are disabled?

If cURL and allow_url_fopen are not enabled, you can try to fetch the content via fsockopen (open an Internet or Unix domain socket connection). In other words, you have to do the HTTP requests manually. See the example in the manual for how to do a GET request; the returned content can then be further processed.
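
For example, a minimal sketch of such a manual GET request (host and path are placeholders, error handling kept to a minimum):

    <?php
    $host = 'www.example.com';
    $path = '/';

    // Open a plain TCP connection to port 80.
    $fp = fsockopen($host, 80, $errno, $errstr, 30);
    if (!$fp) {
        die("Connection failed: $errstr ($errno)");
    }

    // Send the raw HTTP request (HTTP/1.0 keeps the response unchunked).
    $request  = "GET $path HTTP/1.0\r\n";
    $request .= "Host: $host\r\n";
    $request .= "Connection: Close\r\n\r\n";
    fwrite($fp, $request);

    // Read the full response.
    $response = '';
    while (!feof($fp)) {
        $response .= fgets($fp, 1024);
    }
    fclose($fp);

    // Everything after the first blank line is the body.
    list($headers, $body) = explode("\r\n\r\n", $response, 2);
    echo $body;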

If sockets are enabled, you can also use any third-party library utilizing them, for instance Zend_Http_Client. On a sidenote, check out Best Methods to Parse HTML for alternatives to SimpleHTMLDom.
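
A rough sketch with Zend_Http_Client on its socket adapter (the default), so neither cURL nor allow_url_fopen is needed; the URL is a placeholder:

    <?php
    require_once 'Zend/Http/Client.php';

    $client = new Zend_Http_Client('http://www.example.com/');
    // The socket adapter uses plain PHP sockets instead of cURL.
    $client->setAdapter('Zend_Http_Client_Adapter_Socket');

    $response = $client->request('GET');
    if ($response->isSuccessful()) {
        $html = $response->getBody();
        // ... hand $html to your parser of choice
    }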

I didn't know you could use fsockopen even if allow_url_fopen is disallowed. – NikiC Oct 20 '10 at 18:18.

cURL is a specialty API. It's not the HTTP library it's often made out to be, but a generic data transfer library for FTP, SFTP, SCP, HTTP PUT, SMTP, TELNET, etc. If you want to use just HTTP, there is a corresponding PEAR library for that. Or check if your PHP version has the official http extension enabled.
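
A sketch of that approach, assuming PEAR's HTTP_Request2 (one such library) with its pure-PHP socket adapter, so cURL is not required:

    <?php
    require_once 'HTTP/Request2.php';

    $request = new HTTP_Request2('http://www.example.com/', HTTP_Request2::METHOD_GET);
    // Use the socket adapter so the request works without the cURL extension.
    $request->setAdapter('socket');

    $response = $request->send();
    if ($response->getStatus() == 200) {
        echo $response->getBody();
    }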

For scraping, try phpQuery or QueryPath. Both come with built-in HTTP support.
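
If the HTML has already been fetched (e.g. with the fsockopen approach above), phpQuery can be used just for the parsing part; a rough sketch, assuming phpQuery is on the include path:

    <?php
    require_once 'phpQuery.php'; // path depends on how phpQuery is installed

    $html = '<html><body><a href="/foo">Foo</a> <a href="/bar">Bar</a></body></html>';

    // Load the markup and query it with CSS selectors, jQuery-style.
    phpQuery::newDocument($html);
    foreach (pq('a') as $link) {
        echo pq($link)->attr('href'), "\n";
    }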

I think QueryPath uses DOM's loading facilities, and AFAIK those depend on allow_url_fopen. phpQuery, on the other hand, uses Zend_Http_Client, so that might be an option. The PEAR library is a good call too; it's an implementation on top of fsockopen.

– Gordon Oct 7 '10 at 11:01.

If you just want to grab the generated HTML of a web page, use the file_get_contents() function.
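
It really is a one-liner, though it does require allow_url_fopen to be enabled:

    <?php
    $html = file_get_contents('http://www.example.com/');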

The OP's host has allow_url_fopen disabled, so that won't work. – Gordon Oct 7 '10 at 11:03.

file_get_contents() is the simplest method to grab a page without installing extra libraries.

That's the same answer as Martin's above. Unless your own answer adds something new, you are encouraged to upvote the original answer instead of repeating it (especially when it isn't applicable to the OP's problem, like in this case). – Gordon Oct 8 '10 at 17:42.

file_get_contents() isn't an option. – Nate Shoffner Oct 9 '10 at 17:38.
