Probably not the fastest, but it will be the most versatile:

    int lines = 0;
    /* If you need to use an encoding other than UTF-8 you may want to try
       new StreamReader("filename.txt", yourEncoding)
       instead of File.OpenText("myFile.txt"). */
    using (var fs = File.OpenText("myFile.txt"))
        while (!fs.EndOfStream)
        {
            fs.ReadLine();
            lines++;
        }

This will probably be faster... if you need even more speed you might try a Duff's device and check 10 or 20 bytes before the branch:

    int lines = 0;
    var buffer = new byte[32768];
    var bufferLen = 1;
    using (var fs = File.OpenRead("filename.txt"))
        while (bufferLen > 0)
        {
            bufferLen = fs.Read(buffer, 0, 32768);
            for (int i = 0; i < bufferLen; i++)
            {
                if (buffer[i] == '\n')
                    lines++;
            }
        }
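For what it's worth, a literal Duff's device doesn't translate directly to C# (switch cases can't fall through into each other), but the spirit of it - checking several bytes per loop iteration to cut loop overhead - might look roughly like the sketch below; CountNewlines is a made-up helper name, not anything from the answer above.

    // A rough sketch of the unrolling idea: process 8 bytes per iteration
    // and mop up the remainder afterwards.
    static int CountNewlines(byte[] buffer, int len)
    {
        int lines = 0;
        int i = 0;
        int limit = len - (len % 8);
        while (i < limit)
        {
            if (buffer[i]     == '\n') lines++;
            if (buffer[i + 1] == '\n') lines++;
            if (buffer[i + 2] == '\n') lines++;
            if (buffer[i + 3] == '\n') lines++;
            if (buffer[i + 4] == '\n') lines++;
            if (buffer[i + 5] == '\n') lines++;
            if (buffer[i + 6] == '\n') lines++;
            if (buffer[i + 7] == '\n') lines++;
            i += 8;
        }
        for (; i < len; i++)
            if (buffer[i] == '\n') lines++;
        return lines;
    }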
uhh.. when people say "read 32kb at a time" they mean 32768, not 32000. Disk access is fastest when reading in multiples of the cluster size (or equivalent), which is going to be a power of 2. – darron Oct 9 '09 at 18:43

If you really want to optimise it you could check for the real block size of the device. If you are streaming over a network it may be much smaller, and a massive SAN could be larger... but that's probably overkill. – Matthew Whited Oct 9 '09 at 18:47

Note that this only works for some encodings... – Jon Skeet Oct 9 '09 at 18:47

I think the point Matthew is making is that you can load things into a buffer to save memory, not the method of counting the newlines. – Robert Oct 9 '09 at 19:01

It won't work for UTF-16... it would count U+0A00 to U+0AFF as newlines incorrectly. – Jon Skeet Oct 9 '09 at 19:28
Unless you've got a fixed line length (in terms of bytes) you'll definitely need to read the data. Whether you can avoid converting all the data into text or not will depend on the encoding. Now the most efficient way will be reinier's - counting line endings manually.
However, the simplest code would be to use TextReader.ReadLine(). And in fact, the simplest way of doing that would be to use my LineReader class from MiscUtil, which converts a filename (or various other things) into an IEnumerable<string>. You can then just use LINQ:

    int lines = new LineReader(filename).Count();

(If you don't want to grab the whole of MiscUtil, you can get just LineReader on its own from this answer.) Now that will create a lot of garbage which repeatedly reading into the same char array wouldn't - but it won't read more than one line at a time, so while you'll be stressing the GC a bit, it's not going to blow up with large files.
It will also require decoding all the data into text - which you may be able to get away without doing for some encodings. Personally, that's the code I'd use until I found that it caused a bottleneck - it's a lot simpler to get right than doing it manually. Do you absolutely know that in your current situation, code like the above will be the bottleneck? As ever, don't micro-optimise until you have to... and you can very easily optimise this at a later date without changing your overall design, so postponing it isn't going to do any harm.
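(If you just want the shape of the idea without pulling in MiscUtil at all, a minimal stand-in might look like the sketch below. SimpleLineReader is a made-up name and this is not the actual MiscUtil implementation; it just shows how a filename can be exposed as a lazily-enumerated sequence of lines.)

    using System.Collections.Generic;
    using System.IO;
    using System.Linq;

    // Lazily yields one line at a time, so Count() never holds more than a
    // single line in memory.
    public sealed class SimpleLineReader : IEnumerable<string>
    {
        private readonly string _filename;

        public SimpleLineReader(string filename)
        {
            _filename = filename;
        }

        public IEnumerator<string> GetEnumerator()
        {
            using (var reader = File.OpenText(_filename))
            {
                string line;
                while ((line = reader.ReadLine()) != null)
                    yield return line;
            }
        }

        System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()
        {
            return GetEnumerator();
        }
    }

    // Usage: int lines = new SimpleLineReader(filename).Count();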
EDIT: To convert Matthew's answer to one which will work for any encoding - but which will incur the penalty of decoding all the data, of course - you might end up with something like the code below. I'm assuming that you only care about \n, rather than \r, \n and \r\n which TextReader normally handles:

    public static int CountLines(string file, Encoding encoding)
    {
        using (TextReader reader = new StreamReader(file, encoding))
        {
            return CountLines(reader);
        }
    }

    public static int CountLines(TextReader reader)
    {
        char[] buffer = new char[32768];
        int charsRead;
        int count = 0;
        while ((charsRead = reader.Read(buffer, 0, buffer.Length)) > 0)
        {
            for (int i = 0; i < charsRead; i++)
            {
                if (buffer[i] == '\n')
                {
                    count++;
                }
            }
        }
        return count;
    }
I think your solution will be slow. You'd be unnecessarily parsing and converting all the text to the proper internal encoding. A lot of overhead when all we want is to count the EOFs. – Toad Oct 9 '09 at 19:05

@reinier: I think you mean EOL not EOF... pretty sure you only need one EOF :) – Matthew Whited Oct 9 '09 at 19:13

It won't be the fastest possible way, no. It will easily work with any encoding you throw at it, however, and it's very simple to get right. Those usually matter more than getting the absolute fastest code in my experience. I would only start micro-optimising after I'd got good evidence that the simplest way was too slow. – Jon Skeet Oct 9 '09 at 19:14

I marked Matthew as the answer, but your answer and comments have been helpful, and I take them on board, Jon. – f100 Oct 9 '09 at 13:25
If it's a fixed record size you can get the size of a record and then divide the total file size by that amount to get the number of records. If you're just looking for an estimate, what I've done in the past is just read the first x rows (e.g. 200) and use that to come up with an average row size, which you can then use to guess the total number of records (divide total file size by average row size). This works well if your records are going to be fairly uniform and you don't need an exact count.

I've used this on large files (do a quick check to get the file size; if it's over 20 MB then get an estimate rather than reading the entire file). Other than that, the only 100% accurate way is to go through the file line by line using ReadLine.
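A rough sketch of that estimation approach might look like the following. The 200-line sample is just the example figure from above, EstimateLineCount is a made-up name, and UTF-8 plus a two-byte line ending are assumptions for the byte arithmetic:

    using System.IO;
    using System.Text;

    static long EstimateLineCount(string path, int sampleLines = 200)
    {
        long fileSize = new FileInfo(path).Length;

        // Sample the first few lines to get an average line size in bytes.
        long sampledBytes = 0;
        int sampledLines = 0;
        using (var reader = File.OpenText(path))
        {
            string line;
            while (sampledLines < sampleLines && (line = reader.ReadLine()) != null)
            {
                // +2 roughly accounts for the \r\n that ReadLine strips off.
                sampledBytes += Encoding.UTF8.GetByteCount(line) + 2;
                sampledLines++;
            }
        }

        if (sampledLines == 0)
            return 0;

        // If the whole file fit inside the sample, the count is exact.
        if (sampledLines < sampleLines)
            return sampledLines;

        double averageLineSize = (double)sampledBytes / sampledLines;
        return (long)(fileSize / averageLineSize);
    }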
They aren't fixed records, but that optional estimate technique is cool. – f100 Oct 11 '09 at 13:15.
I'd read it 32 KB at a time (or more), count the number of \r\n's in the memory block and repeat until done.
Just don't miss the case where there is a CR last in a block, and LF first in the next... – Guffa Oct 9 '09 at 18:41
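A sketch of that approach, including the block-boundary case Guffa mentions (a \r\n pair split across two reads); CountCrLf is just an illustrative name:

    using System.IO;

    static int CountCrLf(string path)
    {
        int count = 0;
        var buffer = new byte[32768];
        bool previousWasCr = false; // carries a trailing CR over to the next block

        using (var fs = File.OpenRead(path))
        {
            int read;
            while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
            {
                for (int i = 0; i < read; i++)
                {
                    if (previousWasCr && buffer[i] == '\n')
                        count++;
                    previousWasCr = buffer[i] == '\r';
                }
            }
        }
        return count;
    }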
The simplest:

    int lines = File.ReadAllLines(fileName).Length;

This will of course read all of the file into memory, so it's not memory efficient at all.
The most memory efficient is reading the file as a stream and looking for the line break characters. This will also be the fastest, as it has a minimum of overhead. There is no shortcut that you can use. Files are not line based, so there is no extra information that you can use; one way or the other you have to read and examine every single byte of the file.
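As a middle ground between those two, if you can target .NET 4 or later, File.ReadLines streams the lines lazily instead of building the whole array, so memory use stays flat (it still decodes everything into text, though):

    using System.IO;
    using System.Linq;

    // fileName is assumed to be defined as above.
    // ReadLines yields one line at a time, unlike ReadAllLines which
    // materialises the entire string[] in memory first.
    int lines = File.ReadLines(fileName).Count();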
If you don't explain why, it's rather pointless. – Guffa Oct 9 '09 at 19:46

Maybe because the question is "what's the fastest and/or most memory efficient way" and you specified the "simplest"? Either way, you're not below 0 right now, so your comment makes little sense. – bzlm Oct 11 '09 at 12:29

@bzlm: Why doesn't my comment make sense? If someone downvotes an answer I want to know what they think is wrong with it. So far none of the people who have downvoted have said anything about why they think it's a bad answer. – Guffa Mar 5 at 5:08

Right, but I have. :) – bzlm Mar 5 at 11:14
I believe Windows uses two characters to mark the end of the line (10H and 13H if I recall correctly), so you only need to check every second character against these two.
You cannot always assume a file is coming from a Windows editor. – Aaron Oct 10 '09 at 20:09
Since this is a purely sequential process with no dependencies between locations, consider map/reduce if the data is really huge. In C/C++, you can use OpenMP for parallelism. Each thread will read a chunk and count CRLF in that chunk. Finally, in the reduce part, they will sum their individual counts. Intel Threading Building Blocks provides C++ template-based constructs for parallelism. I agree this is a sledgehammer approach for small files, but from a pure performance perspective it is optimal (divide and conquer).
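Since the question is about C#, here's a rough sketch of that divide-and-conquer idea using Parallel.For rather than OpenMP or TBB. CountLinesParallel is a made-up name, the 16 MB chunk size is arbitrary, and only '\n' bytes are counted so a CRLF pair split across chunk boundaries doesn't matter:

    using System;
    using System.IO;
    using System.Threading;
    using System.Threading.Tasks;

    static long CountLinesParallel(string path)
    {
        const int chunkSize = 1 << 24; // 16 MB per chunk
        long fileLength = new FileInfo(path).Length;
        int chunkCount = (int)((fileLength + chunkSize - 1) / chunkSize);
        long total = 0;

        Parallel.For(0, chunkCount, chunkIndex =>
        {
            long start = (long)chunkIndex * chunkSize;
            int toRead = (int)Math.Min(chunkSize, fileLength - start);
            var buffer = new byte[toRead];

            // Each task opens its own stream so reads don't interfere.
            using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read))
            {
                fs.Seek(start, SeekOrigin.Begin);
                int read = 0;
                while (read < toRead)
                {
                    int n = fs.Read(buffer, read, toRead - read);
                    if (n == 0) break;
                    read += n;
                }

                long local = 0;
                for (int i = 0; i < read; i++)
                    if (buffer[i] == (byte)'\n')
                        local++;

                Interlocked.Add(ref total, local);
            }
        });

        return total;
    }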