Chunked transfers are a work-around to permit sending documents whose content-length is not known in advance. ASP.NET, however, by default caches the entire output and hence does know the overall content length. Of course, HTTP is layered over TCP, and behind the scenes TCP is "chunking" anyhow by splitting even a monolithic HTTP response into packets - meaning that if you specify the content-length up front and disable output buffering, you'll get the best latency without requiring HTTP-level chunking. Thus, you don't necessarily need HTTP-level chunking to provide a fast first byte.
Although I'm not an expert on HTTP, I have implemented a simple streaming media server with seeking support, dynamic compression, caching, etc., and I do have a reasonable grasp of the relevance of a fast first byte - and chunking is generally an inferior option if you know the content-length, which is almost certainly why ASP.NET won't let you set it manually: it's just not necessary. However, if you don't know the HTTP content length before transmission and buffering is too expensive, you turn off output buffering, and presumably the server will use a chunked transfer encoding by necessity.
When does the server use chunked transfer encoding? I just tested, and indeed if context.Response.BufferOutput is set to false and the content length is not set, the response is chunked; such a response is 1-2% larger in my entirely non-scientific quick test of a 1.7MB content-encoding: gzip xml document. Since gzip relies on context to reduce redundancy, I'd expected the compression ratio to suffer more, but it seems that chunking doesn't necessarily greatly reduce compression ratios.
If you look at the framework code in reflector, it seems that the transfer encoding is indeed set automatically as needed - i.e. if buffering is off AND no content length is known AND the response is to an HTTP/1.1 request, chunked transfer encoding is used. However, if the server is IIS7 and this is a worker request (integrated mode?), the code branches to a native method - probably with the same behavior, but I can't verify that.
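For reference, the chunked framing itself is easy to sketch. Here's a minimal Python model of an HTTP/1.1 chunked body (each chunk is a hex size line, the data, and a CRLF; a zero-size chunk ends the body) - it illustrates why the chunked response measured above is only 1-2% larger: the gzip content-encoding is applied before the transfer-encoding, so the compressed stream stays contiguous and chunking only adds a few bytes of framing per chunk. This is an illustration of the wire format, not any framework API; the helper names are mine, and real bodies may also carry chunk extensions and trailers, which this sketch ignores.

```python
import gzip
import io

def chunk_encode(payload: bytes, chunk_size: int = 8192) -> bytes:
    """Frame a payload as an HTTP/1.1 chunked body: each chunk is a
    hex size line, the data, and a CRLF; a zero-size chunk ends it."""
    out = io.BytesIO()
    for i in range(0, len(payload), chunk_size):
        chunk = payload[i:i + chunk_size]
        out.write(f"{len(chunk):x}\r\n".encode("ascii"))
        out.write(chunk)
        out.write(b"\r\n")
    out.write(b"0\r\n\r\n")
    return out.getvalue()

def chunk_decode(body: bytes) -> bytes:
    """Reassemble the payload from a chunked body (no chunk extensions)."""
    out, pos = io.BytesIO(), 0
    while True:
        eol = body.index(b"\r\n", pos)
        size = int(body[pos:eol], 16)
        if size == 0:
            return out.getvalue()
        out.write(body[eol + 2:eol + 2 + size])
        pos = eol + 2 + size + 2  # skip the data and its trailing CRLF

# gzip (content-encoding) happens before chunking (transfer-encoding),
# so the compression ratio is untouched; only framing bytes are added.
original = b"<item>some repetitive xml content</item>" * 20000
compressed = gzip.compress(original)
chunked = chunk_encode(compressed)
assert gzip.decompress(chunk_decode(chunked)) == original
overhead = len(chunked) - len(compressed)  # roughly a dozen bytes per 8KB chunk
```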
@Eamon, Chunked transfers are not a workaround, they are a feature. I think you know this, but when the size of the content is unknown and potentially large at the time the first response bytes are written, then it is incorrect, potentially dangerous, and will result in really poor performance if ASP.NET attempts to cache the entire response before sending it. Regarding the use of BufferOutput - can you cite the source that you "just checked"?
Do you mean you tested it? I observed that behavior as well. What I'm looking for is a documented description.
Does Response.Flush() do it? Etc. – Cheeso Apr 26 '10 at 19:11 I observed it.
If you look at the code in reflector, turning off BufferOutput is effectively equivalent to calling flush after each write; and each non-final flush checks and sets chunked transfer encoding, provided headers haven't been written yet or suppressed, the client isn't disconnected, the response's content length isn't set manually, and the HTTP version is 1.1 - with the caveat that there's some code that checks for IIS7 and does something else in that case, which looks more complex. – Eamon Nerbonne Apr 27 '10 at 5:39 Anyhow, the point is that if you're worried about buffering overlarge responses, turn off buffering; don't explicitly worry about chunked transfer encoding (which is simply the mechanism by which the response can be sent when buffering is disabled and the server can't otherwise deduce content length). You don't need to manually enable chunked transfer encoding, and as far as I can tell, there's no reason to.
– Eamon Nerbonne Apr 27 '10 at 5:41 @Eamon - you seem to think that an app should never care whether buffering or chunked transfer occurs. But that's not true. Suppose the data to be transferred is large, and the size is known.
Let's say the app knows the response is going to be exactly 1gb. It makes sense for the app to explicitly make sure that no buffering occurs, which implies chunked encoding. You seem to overlook that possibility.
Or consider time-to-first-byte, which is smaller with chunked transfer than without. Just two examples. The point is, there are good reasons to make explicit use of this feature of HTTP.
– Cheeso Apr 27 '10 at 15:28 No, that's not true - an app may care whether it's buffered or not - but chunked transfer encoding is just one way that an app can be "unbuffered". If you explicitly set a content-length and turn off buffering, you still won't have chunked transfer, but will have low latency - in fact, it'll be slightly faster than chunked transfer encoding since the total response size will be smaller. So, I'm not saying an app shouldn't care about being unbuffered, I'm saying an app shouldn't care about how being unbuffered is implemented - chunked transfer encoding is not needed for low latency!
– Eamon Nerbonne Apr 27 '10 at 15:50.
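The claim above - that a known content-length plus unbuffered writes gives you streaming without any chunking - can be demonstrated outside ASP.NET. Here's a self-contained Python sketch (illustrative only; the thread is about ASP.NET, but the HTTP mechanics are the same): the handler sends Content-Length up front, then streams the body in small flushed writes, and the response arrives with no Transfer-Encoding header at all.

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

BODY = b"x" * 10000

class StreamingHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"

    def do_GET(self):
        # The length is known up front, so no chunking is required.
        self.send_response(200)
        self.send_header("Content-Length", str(len(BODY)))
        self.send_header("Connection", "close")
        self.end_headers()
        for i in range(0, len(BODY), 1000):   # unbuffered, piecewise writes
            self.wfile.write(BODY[i:i + 1000])
            self.wfile.flush()                # first bytes leave immediately

    def log_message(self, *args):             # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), StreamingHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")
resp = conn.getresponse()
data = resp.read()
assert resp.getheader("Transfer-Encoding") is None  # streamed, yet not chunked
assert len(data) == len(BODY)
conn.close()
server.shutdown()
```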
It looks like you need to set up IIS for this. IIS 6 has a property AspEnableChunkedEncoding in the metabase, and you can see the IIS 7 mappings for this on MSDN at msdn.microsoft.com/en-us/library/aa96502...).aspx. This will enable you to set Transfer-Encoding: chunked in your header.
I hope this helps.
Thanks. But AspEnableChunkedEncoding is true by default, so that's not the problem. Also, this doesn't answer the specific question about the use of chunked encoding within ASP.NET.
– Cheeso Apr 21 '10 at 16:22.