"YOU AND THE ART OF ONLINE DATING" is the only product on the market that will take you step-by-step through the process of online dating, provide you with the resources to help ensure success. Get it now!
I'm working with a service backed by an EAV schema, so we're putting caching in place to minimize the perceived performance deficit imposed by EAV when retrieving all primary records and respective attribute collections from the database. We will prime the cache on service startup. We don't have particularly frequent calls for all products -- clients call for differentials after they first populate their local cache with the object map.
In order to serve those differentials, the distributed cache will need to reflect changes to individual records in the database, which happen on an arbitrary basis, and to be queryable for those changes when clients ask for a differential. My first thought was to use a List or Dictionary to store the records in the distributed cache: get the whole collection, manipulate or search it in-memory locally, then put the whole collection back into the cache. Later thinking, however, led to the idea of populating the cache with individual records, each keyed in a way that makes it individually retrievable from, and updatable to, the cache. A sketch of both approaches follows.
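To make the two options concrete, here is a minimal sketch of both update paths. The Product type, the cache name "default", and the key names are all assumptions for illustration, not part of our actual schema:

    using System;
    using System.Collections.Generic;
    using Microsoft.ApplicationServer.Caching;

    [Serializable]
    public class Product
    {
        public int Id;
        public string Name;
        public DateTime LastModified;   // assumed field, used below for differentials
        // ...EAV attribute collection, etc.
    }

    public class ProductCache
    {
        private readonly DataCache _cache = new DataCacheFactory().GetCache("default");

        // Option 1: one monolithic object -- get it all, mutate locally, put it all back.
        public void UpdateViaMonolith(Product changed)
        {
            var all = (Dictionary<int, Product>)_cache.Get("AllProducts");
            all[changed.Id] = changed;        // O(1) in-memory update by key
            _cache.Put("AllProducts", all);   // the whole ~30 MB blob crosses the wire again
        }

        // Option 2: individual records, each keyed for direct retrieval/update.
        public void UpdateViaItemKey(Product changed)
        {
            _cache.Put("Product:" + changed.Id, changed);   // only this record crosses the wire
        }
    }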
This led to wondering which method would be more performant when it comes to updating all the data. We're using Windows Server AppFabric, so we have a BulkGet operation available to us; I don't believe there's any notion of a bulk update, however.
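For illustration, a sketch of what that read/write asymmetry looks like against the AppFabric DataCache API, reusing the hypothetical Product type and "Product:{id}" key format from the sketch above:

    using System.Collections.Generic;
    using System.Linq;
    using Microsoft.ApplicationServer.Caching;

    public static class DifferentialStore
    {
        // Reading many individually keyed records is one round trip with BulkGet.
        public static List<Product> GetMany(DataCache cache, IEnumerable<int> ids)
        {
            IEnumerable<string> keys = ids.Select(id => "Product:" + id);
            return cache.BulkGet(keys)
                        .Where(pair => pair.Value != null)    // skip keys that were evicted/expired
                        .Select(pair => (Product)pair.Value)
                        .ToList();
        }

        // There is no bulk write, so updating N records costs N round trips.
        public static void PutMany(DataCache cache, IEnumerable<Product> changed)
        {
            foreach (Product p in changed)
                cache.Put("Product:" + p.Id, p);
        }
    }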
Is there prevailing thinking as to distributed cache object size? If we had more requests for all items, I would have concerns about network bandwidth, but, for now at least, demand for all items should be fairly minimal. And yes, we're going to test and profile each method, but I'm wondering if there's anything outside the current scope of thinking to consider here.
c# .net caching appfabric distributed-caching
asked Apr 22 '11 at 0:26 by andrewbadera
There is some discussion of this topic on a recent .NET Rocks podcast where the guest is Udi Dahan. The podcast is about CQRS, and they discuss how one of the key advantages of the pattern is that it enables better caching.
Might give you some ideas... – David Hall Apr 22 '11 at 0:29.
So in our scenario, it appears that monolithic cache objects are going to be preferred. With big fat pipes in the datacenter, it takes virtually no perceptible time for ~30 MB of serialized product data to cross the wire. Using a Dictionary, we are able to quickly find a product in the collection in order to return, or update, the individual item. A sketch of that read path is below.
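Roughly what the monolithic read path looks like, again using the assumed Product type and key names from the sketches in the question; the LastModified field driving the differential is an assumption as well:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using Microsoft.ApplicationServer.Caching;

    public static class MonolithStore
    {
        // One Get pulls the whole Dictionary across the wire; after that,
        // finding any single product is a local, O(1) hash lookup.
        public static Product GetOne(DataCache cache, int id)
        {
            var all = (Dictionary<int, Product>)cache.Get("AllProducts");
            return all[id];
        }

        // A client differential: everything changed since the client's last sync.
        public static List<Product> GetDifferential(DataCache cache, DateTime clientLastSync)
        {
            var all = (Dictionary<int, Product>)cache.Get("AllProducts");
            return all.Values.Where(p => p.LastModified > clientLastSync).ToList();
        }
    }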
With thousands of individual entities in the cache, all well under 1 MB each, bulk operations simply take too long: too much per-item overhead and latency in the network operations. Edit: we're now considering maintaining both the individual entities and the monolithic collection of entities, because with the monolith alone, retrieving an individual entity turns out to be a fairly expensive operation against a production dataset. A sketch of that dual-write follows.
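A minimal sketch of what keeping both representations in sync might look like, under the same assumptions as the earlier sketches. Note the get/modify/put of the monolith is not atomic, so concurrent writers would need AppFabric's pessimistic locking (GetAndLock/PutAndUnlock) or optimistic version checks on top of this:

    using System.Collections.Generic;
    using Microsoft.ApplicationServer.Caching;

    public class HybridProductCache
    {
        private readonly DataCache _cache = new DataCacheFactory().GetCache("default");

        // Write through to both representations: the per-item key keeps single-item
        // reads cheap, while the monolith keeps full-collection and differential
        // reads cheap.
        public void Update(Product changed)
        {
            _cache.Put("Product:" + changed.Id, changed);

            var all = (Dictionary<int, Product>)_cache.Get("AllProducts");
            all[changed.Id] = changed;
            _cache.Put("AllProducts", all);
        }
    }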