"YOU AND THE ART OF ONLINE DATING" is the only product on the market that will take you step-by-step through the process of online dating, provide you with the resources to help ensure success. Get it now!
This script should limit the locations and file types that it's going to return. Think of somebody trying this: yoursite.com/yourscript.php?textFile=../... (or something similar).

Try to find out where the delays occur: does the HTTP request take long, or is the file so large that reading it takes long? If the request is slow, try caching results locally. If the file is huge, you could set up a cron job that extracts the last line of the file at regular intervals (or at every change) and saves it to a file that your other script can access directly.
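A minimal sketch of the location/file-type restriction (the 'logs' directory, the .txt extension, and the textFile parameter are illustrative assumptions, not from the original script):

<?php
// Sketch only: restrict requests to one directory and one file type.
$baseDir   = realpath(__DIR__ . '/logs'); // assumed to exist
$requested = isset($_GET['textFile']) ? $_GET['textFile'] : '';
$fileName  = basename($requested);        // drops any ../ components
$fullPath  = realpath($baseDir . '/' . $fileName);

if ($fullPath === false                    // file does not exist
    || strpos($fullPath, $baseDir) !== 0   // escaped the base directory
    || substr($fullPath, -4) !== '.txt') { // wrong file type
    header('HTTP/1.0 403 Forbidden');
    exit;
}
readfile($fullPath);
?>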
readfile() is your friend here: it reads a file on disk and streams it to the client.

Script 1:

<?php
$fileName = 'path/to/file.txt'; // sanitize this if it is user-supplied
if (file_exists($fileName)) {
    readfile($fileName);
}
?>

This script could further be improved by adding caching to it, but that is more complicated. The very basic caching could be:

Script 2:

<?php
$fileName = 'path/to/file.txt';
$lastModifiedTimeStamp = filemtime($fileName);
if (isset($_SERVER['HTTP_IF_MODIFIED_SINCE'])) {
    $modifiedSince = strtotime($_SERVER['HTTP_IF_MODIFIED_SINCE']);
    if ($modifiedSince >= $lastModifiedTimeStamp) {
        header("HTTP/1.0 304 Not Modified");
        exit(0);
    }
}
header('Content-Length: ' . filesize($fileName));
header('Expires: ' . gmdate('D, d M Y H:i:s \G\M\T', time() + 604800)); // (3600 * 24 * 7)
header('Last-Modified: ' . date('D, d M Y H:i:s \G\M\T', $lastModifiedTimeStamp));
readfile($fileName);
?>
– Abs Jun 10 '09 at 12:20

'script 1' replaces your 2 scripts. readfile() reads the entire file and sends it to the browser. If you are unsure, forget about 'script 2'. – Jacco Jun 10 '09 at 12:24

Does file_exists() work on remote files (URLs)? – Abs Jun 10 '09 at 12:29

More info here (yes, it can read remote files): nl2.php.net/file_exists – Jacco Jun 10 '09 at 12:58
First, find out where the time is actually spent, e.g. by profiling with xdebug. If you've done that, read on:

You cannot really optimize the first script usefully: if you need an HTTP request, you need an HTTP request. Skipping the HTTP request could be a performance gain, though, if it is possible (i.e. if the first script can access the same files the second script would operate on).
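If that is possible (for instance, because the log directory is shared between the two machines; the mount path below is made up), the whole HTTP round trip collapses into a plain local read:

<?php
// Assumption: both scripts see the same filesystem; the path is illustrative.
$content = file_get_contents('/mnt/shared-logs/data.txt');
?>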
As for the second script: reading the whole file into memory does look like some overhead, but that is negligible if the files are small. The code looks very readable; I would leave it as is in that case. If your files are big, however, you might want to use fopen() and its friends fseek() and fread():

function getLastLine($fileName)
{
    # Do not forget to sanitize the file name here!
    # An attacker could demand the last line of your password
    # file or similar! ($fileName = '../../passwords.txt')
    $filePointer = fopen($fileName, 'r');
    $i = 1;
    $chunkSize = 200;
    # Read 200-byte chunks from the end of the file and check
    # if the chunk contains a newline
    do {
        fseek($filePointer, -($i * $chunkSize), SEEK_END);
        $line = fread($filePointer, $i++ * $chunkSize);
    } while (($pos = strrpos($line, "\n")) === false);
    fclose($filePointer);
    return substr($line, $pos + 1);
}
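Used like this, for example (getLastLine() is just a name given to the snippet above; the log path is illustrative):

echo getLastLine('/var/log/app.log');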
Thanks for your insightful reply. I have not used that (xdebug) before, but I am looking into it now. The text files I read in are no longer than 20 lines, and the lines are about 5-6 words long. – Abs Jun 10 '09 at 12:20

I wouldn't touch the code, then. The overhead of interpreting the loop could even be greater than the gain with files of that size :) – soulmerge Jun 10 '09 at 12:31

Ah, you're right, thanks! :) – Abs Jun 10 '09 at 12:44
If the files are unchanging, you should cache the last line. If the files are changing and you control the way they are produced, it might or might not be an improvement to reverse the order lines are written, depending on how often a line is read over its lifetime.

Edit: Your server could figure out what it wants to write to its log, put it in memcache, and then write it to the log. The request for the last line could then be fulfilled from memcache instead of a file read.
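A rough sketch of that edit, using the pecl Memcached extension (the key name 'log_last_line' and the log path are made up for illustration):

<?php
$cache = new Memcached();
$cache->addServer('localhost', 11211);

// Writer side: remember the line in memcache, then append it to the log.
$line = 'newest log entry';
$cache->set('log_last_line', $line);
file_put_contents('/var/log/app.log', $line . "\n", FILE_APPEND);

// Reader side: serve the last line from memory, fall back to the file.
$lastLine = $cache->get('log_last_line');
if ($lastLine === false) {
    $lines    = file('/var/log/app.log');
    $lastLine = rtrim(end($lines));
}
echo $lastLine;
?>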
Yeah, the files are changing all the time. I like the last idea, reversing the order of lines so I only retrieve the first line. I need every little bit of performance gain I can get! – Abs Jun 10 '09 at 12:22
The most probable source of delay is that cross-server HTTP request. If the files are small, the cost of fopen/fread/fclose is nothing compared to the whole HTTP request. (Not long ago I used HTTP to retrieve images to dynamically generate image-based menus. Replacing the HTTP request with a local file read reduced the delay from seconds to tenths of a second.)

I assume that the obvious solution of accessing the file server's filesystem directly is out of the question. If it isn't, then that is the best and simplest option. If it is, you could use caching: instead of getting the whole file, you just issue a HEAD request and compare the timestamp to a local copy.

Also, if you are ajax-updating a lot of clients based on the same files, you might consider looking at using comet (meteor, for example). It's used for things like chats, where a single change has to be broadcast to several clients.
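A sketch of the HEAD-plus-timestamp caching mentioned above, using curl (the URL and the cache path are placeholders):

<?php
$url       = 'http://other-server/data.txt'; // placeholder
$cacheFile = '/tmp/data.txt.cache';          // placeholder

// HEAD request only: ask the remote server for its Last-Modified time.
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_NOBODY, true);
curl_setopt($ch, CURLOPT_FILETIME, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch);
$remoteTime = curl_getinfo($ch, CURLINFO_FILETIME); // -1 if the server didn't say
curl_close($ch);

// Download the whole file only when the remote copy is newer than the cache.
if (!file_exists($cacheFile) || $remoteTime > filemtime($cacheFile)) {
    file_put_contents($cacheFile, file_get_contents($url));
}
$content = file_get_contents($cacheFile);
?>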