HttpWebRequest and HttpWebResponse ideal async buffer sizes?

I think a better solution would be not to worry about the buffer sizes too much, but rather to make sure you don't block the threads. If you pass a delegate to the callback parameter of the Begin* methods, that callback is executed when the operation completes and you can call End* from there, which will (almost) immediately return. No blocking necessary.

And regarding the buffer sizes, if they really matter to you, you should profile and find out what works best in your specific situation.
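A minimal sketch of that pattern (not the original answerer's code; the URL is a placeholder, and the ReadToEnd inside the callback is kept synchronous only for brevity):

    using System;
    using System.IO;
    using System.Net;

    class AsyncRequestSketch
    {
        static void Main()
        {
            var request = (HttpWebRequest)WebRequest.Create("http://example.com/");

            // The callback runs on a thread-pool thread once the response arrives;
            // EndGetResponse returns (almost) immediately at that point, so nothing
            // here sits blocked waiting for the server.
            request.BeginGetResponse(ar =>
            {
                var req = (HttpWebRequest)ar.AsyncState;
                using (var response = (HttpWebResponse)req.EndGetResponse(ar))
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    Console.WriteLine(reader.ReadToEnd()); // synchronous read, for brevity only
                }
            }, request);

            Console.ReadLine(); // keep the demo process alive while the callback runs
        }
    }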

Funny, who does the real "Begin*" job? It is a thread from the ThreadPool that does the call and the waiting for you. What is different from creating your own thread that does the same job? – L. B Oct 5 at 21:14

That depends. But in the case of IO operations, IO completion ports are used. That means the Begin* method starts the work and sets up the port. When the work completes, the port fires and the callback is executed on a thread-pool thread. And it's different from creating a new thread for that, because creating a new thread always allocates about a megabyte for its stack. – svick Oct 5 at 21:20

So you say: BeginA, wait in a delegate function, in that function start BeginB, wait in another delegate, etc. (instead of calling A(); B() in a thread). Do you really know that his code requires that complexity? As I said in my answer, "premature optimization is the root of all evil". – L. B Oct 5 at 21:39

I didn't say anything about any waiting. And this is not a premature optimization if there is a "large number of concurrent requests", as the OP said. – svick Oct 5 at 21:45

I know. Waiting is done by threads created (or already working) by the BeginA* functions, and your delegate is called when the result is ready. It was just to simplify the workflow. I don't know how much you may find it related to this question, but I love this blog post: chaosinmotion.com/blog/?p=622 – L. B Oct 5 at 22:01

There's no definitive rule as to the actual values you should set, beyond avoiding the obvious extremes. It really depends on the type of data you're transferring and how much of it there is. You probably want to set your write buffer quite high, but leave your read buffer lower.

This is because writes are (usually) more expensive than reads when it comes to this kind of thing. The best thing to do in this situation is to try a few values and see how well they scale. You can always change them later if necessary.
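One rough way to make that trial-and-error easy, sketched below, is to take the buffer size as a parameter so different values can be benchmarked against real data; the 8 KB and 64 KB figures are placeholders, not recommendations from the answer.

    using System.IO;

    static class BufferedCopy
    {
        // Copies the source stream to the destination using a caller-supplied
        // buffer size, so different sizes can be benchmarked against real data.
        public static void Copy(Stream source, Stream destination, int bufferSize)
        {
            var buffer = new byte[bufferSize];
            int read;
            while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
            {
                destination.Write(buffer, 0, read);
            }
        }
    }

    // Usage sketch: a larger buffer on the file being written, a smaller one
    // for draining the response stream. Profile and adjust for your workload.
    //   using (var file = new FileStream(path, FileMode.Create, FileAccess.Write,
    //                                    FileShare.None, 64 * 1024))
    //   {
    //       BufferedCopy.Copy(responseStream, file, 8 * 1024);
    //   }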

In the end "Request->Write->Response->Read" should be done in a sequential manner(even if you do it async). Why don't you execute this chain in a seperate thread without asyncs instead of dealing with minor optimizations. "premature optimization is the root of all evil.

Well, most of the time a request is in a wait state. I could create hundreds of threads, but that's really nothing but wasting system resources, since these requests are I/O-bound. All these threads would have to be managed by the scheduler, yet they'd mostly just wait. I'm using a low number of threads to limit concurrency, but the issue won't go away, since creating one thread per request is not really an option. – xxbbcc Oct 5 at 20:03

I have to say you are wrong. As you said, "most of the time a request is in a wait state". So you should have only one thread waiting for requests, and as soon as a new request arrives, it should create a new worker thread to do the job, as I said in my answer. – L. B Oct 5 at 20:09

I have a feeling (maybe wrong) that you think I'm serving these requests. I'm sending them and waiting for the server's responses. So the setup is a few threads that keep sending out the requests and waiting on the responses. Once a response is in, I read the stream and then save the data. There's no point in creating a lot of threads because the actual processing of the response data is not that much. Waits happen in the thread pool's I/O threads - my threads just get the notifications and go through the async workflow. – xxbbcc Oct 5 at 22:55
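A rough sketch of how such a workflow might look (this is my reading of the description above, not the OP's actual code; the URL and buffer size are placeholders): BeginGetResponse followed by a BeginRead/EndRead loop, so the waiting happens on the I/O completion threads and the callbacks only handle the notifications.

    using System;
    using System.IO;
    using System.Net;

    class AsyncDownloadSketch
    {
        const int BufferSize = 8 * 1024; // placeholder; tune by profiling

        static void Main()
        {
            var request = (HttpWebRequest)WebRequest.Create("http://example.com/");
            request.BeginGetResponse(OnResponse, request);
            Console.ReadLine(); // keep the demo process alive
        }

        static void OnResponse(IAsyncResult ar)
        {
            var request = (HttpWebRequest)ar.AsyncState;
            var response = (HttpWebResponse)request.EndGetResponse(ar);
            var stream = response.GetResponseStream();
            var buffer = new byte[BufferSize];
            var collected = new MemoryStream();

            AsyncCallback onRead = null;
            onRead = readResult =>
            {
                int read = stream.EndRead(readResult);
                if (read > 0)
                {
                    collected.Write(buffer, 0, read);
                    stream.BeginRead(buffer, 0, buffer.Length, onRead, null); // keep reading
                }
                else
                {
                    stream.Dispose();
                    response.Close();
                    // the "save the data" step would go here
                    Console.WriteLine("Downloaded {0} bytes", collected.Length);
                }
            };

            stream.BeginRead(buffer, 0, buffer.Length, onRead, null);
        }
    }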
