This will use less memory, since it doesn't load the whole file into memory:

    $file = "largefile.txt";
    $linecount = 0;
    $handle = fopen($file, "r");
    while (!feof($handle)) {
        $line = fgets($handle);
        $linecount++;
    }
    fclose($handle);
    echo $linecount;

fgets loads a single line into memory (if the second argument $length is omitted it will keep reading from the stream until it reaches the end of the line, which is what we want). This is still unlikely to be as quick as using something other than PHP, if you care about wall time as well as memory usage.
The only danger with this is if any lines are particularly long (what if you encounter a 2GB file without line breaks?). In that case you're better off slurping it in in chunks, and counting end-of-line characters:

    $file = "largefile.txt";
    $linecount = 0;
    $handle = fopen($file, "r");
    while (!feof($handle)) {
        $line = fgets($handle, 4096);
        $linecount = $linecount + substr_count($line, PHP_EOL);
    }
    fclose($handle);
    echo $linecount;
Thanks for the explanation Dominic - that looks good. I had a feeling it had to be done line by line and not letting count(file()) load the whole thing into memory! – Abs Jan 29 '10 at 14:38

The only danger of this snippet is huge files without linebreaks, as fgets will then try to suck up the whole file. It'd be safer to read 4kB chunks at a time and count line-termination characters. – David Schmitt Jan 29 '10 at 14:51
@David - how does my edit look? I'm not 100% confident about PHP_EOL - does that look right? – Dominic Rodger Jan 29 '10 at 14:58

Not perfect: you could have a Unix-style file (\n) being parsed on a Windows machine (PHP_EOL == "\r\n"). – nickf Jan 29 '10 at 15:01

@nickf - good point. How would you address it? How does fgets work? – Dominic Rodger Jan 29 '10 at 15:23
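One way to sidestep the PHP_EOL portability issue raised above (a sketch of my own, not from the thread) is to count "\n" rather than PHP_EOL: a \r\n line still contains exactly one \n, so Unix and Windows files both count correctly regardless of the host OS. Reading fixed-size fread() chunks instead of fgets() lines also covers the huge-file-without-linebreaks case. The function name is mine:

```php
<?php
// Count lines by reading fixed-size chunks and counting "\n" characters.
// Counting "\n" instead of PHP_EOL works for both \n and \r\n files,
// and fread() caps memory use at the chunk size even if the file has
// no line breaks at all.
function count_lines($path)
{
    $linecount = 0;
    $handle = fopen($path, "r");
    while (!feof($handle)) {
        $chunk = fread($handle, 8192);   // never more than 8 kB in memory
        $linecount += substr_count($chunk, "\n");
    }
    fclose($handle);
    return $linecount;
}
```

Same caveat as the substr_count version above: a final line without a trailing newline is not counted.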
If you're running this on a Linux/Unix host, the easiest solution would be to use exec() or similar to run the command wc -l $path. Just make sure you've sanitized $path first to be sure that it isn't something like "/path/to/file ; rm -rf /".
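A minimal sketch of that approach, assuming a Unix host; escapeshellarg() handles the sanitization mentioned by quoting $path so that "; rm -rf /"-style input is passed as a literal filename rather than executed. The function name is mine:

```php
<?php
// Shell out to wc -l to count lines. Redirecting the file into wc
// (rather than passing the name as an argument) keeps the filename
// out of wc's output, so a simple cast is enough to parse the result.
function count_lines_wc($path)
{
    $output = exec('wc -l < ' . escapeshellarg($path));
    return (int) trim($output);
}
```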
I am on a Windows machine! If I were on Linux, I think that would be the best solution! – Abs Jan 29 '10 at 14:39

That is a non-portable solution. – ghostdog74 Jan 29 '10 at 16:33

@ghostdog74: Why, yes, you're right. It is non-portable. That's why I explicitly acknowledged my suggestion's non-portability by prefacing it with the clause "If you're running this on a Linux/Unix host...". – Dave Sherohman Jan 30 '10 at 10:11
Yes. Open the file, read it line by line and increment a counter for each line.
You have several options. The first is to increase the available memory limit, which is probably not the best way to do things given that you state the file can get very large. The other way is to use fgets to read the file line by line and increment a counter, which should not cause any memory issues at all, as only the current line is in memory at any one time.
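The two options might look like this as a sketch; the function names and the memory-limit value are illustrative, not from the answer:

```php
<?php
// Option 1: raise the memory limit and let file() load the whole file
// into an array (works, but defeats the point for genuinely large files).
function count_lines_all_at_once($path)
{
    ini_set('memory_limit', '512M');   // illustrative value
    return count(file($path));
}

// Option 2: stream line by line with fgets; only the current line is
// ever held in memory.
function count_lines_streaming($path)
{
    $linecount = 0;
    $handle = fopen($path, 'r');
    while (fgets($handle) !== false) {
        $linecount++;
    }
    fclose($handle);
    return $linecount;
}
```

Both count a trailing partial line (one with no final newline) as a line, so the two agree on the same file.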