CI Proxy Library - Browser simulator (REST, Cookies and Proxy Support, PURE PHP) [NEW] HTTP Authorization
[eluser]toopay[/eluser]
Hi all. It's been a while since I've used CI in my (commercial) projects. There are things I like about it compared with other frameworks (I use several; my two favorites are Kohana and CI).

When I develop a project, I often need a simple way to call another controller, or even an external resource (e.g. an API from Google, or iptocountry@sourceforge). I created a simple cURL class to do that. This is the (almost) stable version, and it's enough for my needs. I'll take it to a higher level as soon as I get a long holiday from work.

It has several features which might help you during development (or even on a live site). Here's the way you use it (a fuller sketch follows at the end of this post):

Code:
// Simple way to use this library

Grab it! And I'll be here if you have feedback, a bug report, or any trouble with it. Cheers! Toopay
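To flesh out that one-liner, here is a minimal sketch of the basic flow. Loading the library as 'proxy' and the site() method are taken from later posts in this thread; treating site()'s return value as the page body is an assumption.

Code:
// Load the library (CodeIgniter style, as shown later in this thread)
$this->load->library('proxy');

// Fetch an external resource; assuming site() takes a URL and
// returns the response body as a string
$page = $this->proxy->site('http://www.example.com/');
echo $page;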
[eluser]toopay[/eluser]
Well, because this library's main purpose is just to fetch/send simple HTTP GET requests, one feature that distinguishes it from (*sigh*) other CI cURL libraries is that it can maintain image URLs from the targeted site (see the difference in the attached images below; for comparison, I used Phil's cURL library to generate the failed rendering page). It maintains anchor tag and form URLs too.
[eluser]toopay[/eluser]
Uh-oh! Updated to v1.0.1. MORE FEATURES (all old features are still there):
1. Get full HTTP header.
2. Set proxy call.
3. Set delay between HTTP calls.
4. Set user agent.
5. Internal cache (using gzip).
6. Persistent calls (processing redirects, either from the header or from a meta tag).
7. NO CURL OR OTHER FANCY DEPENDENCIES! PURE PHP.
8. Cookie support.
9. Log and error flags.

A sketch of some of these options follows below.

Code:
$this->load->library('proxy');
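To illustrate how a few of these options might be wired up, here is a hedged sketch. Only $this->load->library('proxy') and site() are confirmed elsewhere in this thread; set_proxy() and set_user_agent() are hypothetical names standing in for the "set proxy call" and "set user agent" features, so check the library's ReadMe.txt for the real ones.

Code:
$this->load->library('proxy');

// Hypothetical setter names for the 'set proxy' and 'set user agent'
// features listed above; the real method names may differ
$this->proxy->set_proxy('127.0.0.1:8080');
$this->proxy->set_user_agent('Mozilla/5.0 (compatible; MyBot/1.0)');

// site() is described later in this thread
$response = $this->proxy->site('http://www.example.com/');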
[eluser]ClaudioX[/eluser]
Very, very nice! When you call another controller (on the same site/server), is the delay significant?
[eluser]toopay[/eluser]
No! It's fast (test it). The delay is optional; you can use the set_delay() function if you're hitting more than one site (chained calls).
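For example, something like this sketch, assuming set_delay() takes a number of seconds and site() fetches a URL:

Code:
$this->load->library('proxy');

// Pause between calls when chaining requests across several sites;
// the argument is assumed to be a number of seconds
$this->proxy->set_delay(3);

$first  = $this->proxy->site('http://www.example.com/');
$second = $this->proxy->site('http://www.example.org/');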
[eluser]toopay[/eluser]
V 1.0.2: cleaned up some leftover debug code, fixed the structure, and added a ReadMe.txt.
[eluser]toopay[/eluser]
[UPDATED] V 1.0.3. MORE FEATURES (all old features are still there):
1. Get crawled web content.
2. Optimized maintenance of rendered HTML: CSS and JS paths.

Code:
// Wanna see how the CodeIgniter forums look through a search engine spider's eyes?
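Continuing that snippet as a sketch: crawl() is named in the next posts, and per the author it returns an informative array for the URL (meta tags, anchors/links); the exact array shape is not documented here, so just dump it.

Code:
$this->load->library('proxy');

// crawl() returns an informative array for the URL (meta tags,
// anchors/links), per the author's description below
$spider_view = $this->proxy->crawl('http://ellislab.com/forums/');
print_r($spider_view);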
[eluser]tieungao[/eluser]
Hi, I have some questions about this lib:
1. After I crawl and get the webpage returned with this, how can I extract specific information from the page? Previously I used an HTML DOM parser to get info from a site.
2. Is there any difference between a browser simulated by this lib and a real user's browser?
[eluser]toopay[/eluser]
[quote author="tieungao" date="1303459111"]1. After I crawl and get the webpage returned with this, how can I extract specific information from the page? Previously I used an HTML DOM parser to get info from a site.[/quote]
The crawl() function generates an informative array corresponding to the URL, including meta tags and anchors (links). If you want the full response, use site() or head(). Both have (optional) render functionality, which maintains image, anchor/link, JS and CSS paths.
[quote author="tieungao" date="1303459111"]2. Is there any difference between a browser simulated by this lib and a real user's browser?[/quote]
Basically, it simulates a real web browser, only you drive it with lines of code rather than a mouse and keyboard.
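If you still need targeted extraction on top of the full response, PHP's built-in DOM extension works fine; here is a sketch, assuming (per the post above) that site() returns the full HTML response as a string.

Code:
$this->load->library('proxy');

// Assuming site() returns the raw HTML body as a string
$html = $this->proxy->site('http://www.example.com/');

// Parse it with PHP's built-in DOM extension
$dom = new DOMDocument();
@$dom->loadHTML($html); // suppress warnings from malformed markup

// e.g. collect every anchor's href via XPath
$xpath = new DOMXPath($dom);
foreach ($xpath->query('//a[@href]') as $a) {
    echo $a->getAttribute('href'), PHP_EOL;
}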