I think this is very close to what @Zachary suggests. And it (seems to) work(s); actually I think using `using` as @Zachary does is even "nicer". My main point being I cannot see the blocking behaviour of GetResponse() you (seem to) describe.

In addition, the following code only roughly shows how everything works; it will not read the stream to the end, for example (unless by coincidence :)). But it should work if you copy-and-paste it into an empty "Console Application" project in Visual Studio. You can try using some "shorter" URL for a test.

The example here starts downloading an ISO of the Debian distribution (a bit more than 600 MByte). Sorry Debian, I did not mean to steal your bandwidth. Btw: is there something sensible one can use to test such a scenario?

The code is strongly inspired by "C# - How to read a continuous stream of XML over HTTP":

```csharp
namespace StreamReadWebRequest
{
    using System;
    using System.Collections.Generic;
    using System.Text;
    using System.Net;
    using System.IO;

    class Program
    {
        static void Main(string[] args)
        {
            HttpWebRequest req;
            HttpWebResponse res = null;
            try
            {
                req = (HttpWebRequest)WebRequest.Create(
                    "http://cdimage.debian.org/debian-cd/5.0.4/i386/iso-cd/debian-504-i386-CD-1.iso");
                res = (HttpWebResponse)req.GetResponse();
                Stream stream = res.GetResponseStream();
                byte[] data = new byte[4096];
                int read;
                while ((read = stream.Read(data, 0, data.Length)) > 0)
                {
                    Process(data, read);
                }
            }
            finally
            {
                if (res != null)
                    res.Close();
            }
            Console.In.Read();
        }

        private static void Process(byte[] data, int read)
        {
            // Only the first 'read' bytes of the buffer are valid.
            Console.Out.Write(Encoding.ASCII.GetString(data, 0, read));
        }
    }
}
```
Ditto ... what scherand said :) – Wardy May 18 '10 at 14:57.
If you set the buffer size on your read, you can read in the data in chunks... example (assuming `buf`, `sb`, and `response` from the surrounding context; declarations added here so the snippet stands on its own):

```csharp
byte[] buf = new byte[8192];
StringBuilder sb = new StringBuilder();

// Get the response stream
using (Stream resStream = response.GetResponseStream())
{
    string parseString = null;
    int count = 0;
    do
    {
        // Read a chunk of data
        count = resStream.Read(buf, 0, buf.Length);
        if (count != 0)
        {
            // Convert to ASCII
            parseString = Encoding.ASCII.GetString(buf, 0, count);
            // Append string to results
            sb.Append(parseString);
        }
    } while (count > 0);
}
```
This is after the response is complete; I want to be able to read the response stream before that. – user434917 May 16 '10 at 11:22.
I'm not sure what you have on your side, but I know for a fact (and I'm sure many people will agree here) that GetResponse() will NOT download the whole file back. It will send the request, wait for the response, and get the response headers. After you have the response, you can easily get the response stream with GetResponseStream(), which is the actual data stream that's downloading from the server.
And you can easily access the response stream BEFORE the whole file is downloaded. This is 100% true and tested. If you're not getting the same behaviour (which is really strange, and shouldn't happen), could you add a code example that does not behave as I explained above?
Also, do test the example posted by scherand. It just proves once again that it works just fine, without any special hacks.
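To see this without downloading anything real, here is a minimal sketch (my own illustration, not from the answers above) that uses a local `HttpListener` as a stand-in server which trickles its body out slowly. It shows that `GetResponse()` returns as soon as the headers arrive, long before the body is complete, and that the response stream can be read while data is still arriving:

```csharp
using System;
using System.Net;
using System.IO;
using System.Threading;

class HeadersFirstDemo
{
    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8080/");
        listener.Start();

        // Server thread: send headers immediately, then trickle the
        // 10-byte body out over roughly two seconds.
        new Thread(() =>
        {
            HttpListenerContext ctx = listener.GetContext();
            ctx.Response.ContentLength64 = 10;
            Stream output = ctx.Response.OutputStream;
            for (int i = 0; i < 10; i++)
            {
                output.WriteByte((byte)('0' + i));
                output.Flush();
                Thread.Sleep(200);
            }
            output.Close();
        }).Start();

        var req = (HttpWebRequest)WebRequest.Create("http://localhost:8080/");
        DateTime start = DateTime.Now;
        var res = (HttpWebResponse)req.GetResponse();
        // This should print well under the ~2 s the body takes.
        Console.WriteLine("GetResponse() returned after {0:F0} ms",
            (DateTime.Now - start).TotalMilliseconds);

        using (Stream s = res.GetResponseStream())
        {
            int b;
            while ((b = s.ReadByte()) != -1)
                Console.Write((char)b);   // bytes arrive as the server sends them
        }
        listener.Stop();
    }
}
```

The port and URL are arbitrary; the point is only the timing: the client has a usable response stream while the server is still writing.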